My students are working on creating their own app using the data that has been provided. Several students can’t create a model card because the tool reports profanity or an attempt to collect PII. How can this be when they are using the provided data?
I just checked and I can’t replicate this. Which dataset did they select? If I can replicate it, I can report it. You can also submit a report to email@example.com
I don’t get those errors, so I’m wondering whether your school or district has a filter that blocks content based on keywords. The one word I see in all the screenshots is “sex”, so it’s possible your filter is picking up on that word and blocking the content.
What if they choose different data sets or different criteria?
I have had similar things happen to me at school when I go to click on a news headline with similar keywords …
Could that be it?
Let me jump in really quickly and say: this is a bug on our side and not something specific to anyone’s classrooms or what students are doing.
Our tools have a built-in profanity filter that detects when students use a word from a set list. “Sex” is one of the words on that list, and has been for several years. But this means some of our datasets that include a “sex” column can’t be saved correctly.
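As a rough illustration (this is not Code.org’s actual implementation, and the real word list is not public), a naive word-list filter like the one described will flag a legitimate dataset column the same way it flags actual profanity:

```python
# Hypothetical sketch of a word-list profanity filter.
# BLOCKLIST is an assumption for illustration; "sex" is the only
# entry we know is on the real list from this thread.
BLOCKLIST = {"sex"}

def flagged_columns(columns):
    """Return the column names that match the blocklist,
    compared case-insensitively."""
    return [c for c in columns if c.lower() in BLOCKLIST]

# A medical-style dataset with a "sex" column trips the filter
# even though nothing inappropriate is present.
print(flagged_columns(["age", "sex", "blood_pressure"]))  # ['sex']
```

Because the check matches whole words with no sense of context, the only workarounds are renaming or avoiding the column, or adding an exception on the filter side, which is what the open bug below is about.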
There has been an open bug on our end to add special exceptions for AI datasets, and I had hoped we’d get to it earlier in the year, but it still hasn’t been addressed. Please send an email to firstname.lastname@example.org with this same information and screenshots to help our engineers address it. In the meantime, I’d recommend avoiding columns that include “sex”, and hopefully this will get fixed on our end soon!
Dan - Code.org Curriculum Writer