Data and our world


#1

There have been a lot of changes to this unit. Those changes were well described in a timely manner. As always, I am grateful for the competence and professionalism of code.org.

That said:)

I am concerned about the bias in the parts that are left. In two different third-party videos, “predictive policing” is described as an application of big data and artificial intelligence. In neither case is predictive policing sufficiently critiqued. There is a strong and well-understood body of work critiquing predictive policing as a tool that exacerbates existing bias in policing; specifically, it perpetuates and codifies a negative bias against Black people.

Additionally, in the WSJ opinion piece “It’s Modern Trade: Web Users Get as Much as They Give”, students are presented with an opinion that is very corporate-friendly. That is of course a good and reasonable thing to do. And it would be good and reasonable to represent an opposing opinion as well, especially given our world.

The reason why this class is important is to give students the tools they need to think critically about their world. To quote the AP test Explore Task: “Computing innovations impact our lives in ways that require considerable study and reflection for us to fully understand them.” For example, last year we exposed how social media advertising, micro-targeting and filter bubbles were used to proliferate “fake news”. I would love to see this as a counterpoint to the WSJ piece. Maybe that’s just my job?

Finally, I would like to address Big Data and Artificial Intelligence. In Code.org curriculum resources like those mentioned above, artificial intelligence and automation in general are represented as inevitable and faceless, as in “the robots will take our jobs”. Zuckerberg did this in his testimony before Congress. When he was asked how he would deal with flagging hate speech, he said “AI”. The truth is that artificial intelligence is just a computer program written by unavoidably biased humans (see predictive policing). The computers are not taking our jobs. More accurately, jobs are being eliminated by companies that are deploying technology that automates tasks formerly done by humans. Similarly, companies hire humans to write AI programs, which are exactly as flawed as the people and institutions that create them. What Zuckerberg was actually saying was not “AI will fix it” but “Facebook will fix it”. Except that is not as convincing, because Facebook made the problem in the first place.

Gratefully,

Steve Wright


#2

Hi @steve1 - I also supplement a lot with “real world” articles depending on what is in the news. I appreciate your perspective and think it is important to present multiple standpoints. I know my students probably think I hate technology because I have them read and listen to a lot of things about the harmful impacts of technology. But I do think the message of “look how amazing technology is” is everywhere. Finding that right balance is tough. Do you have any favorite lessons on current events you can share? I am always looking for new ideas!

Thanks!
KT


#3

I don’t have any good lessons. :frowning:. This is actually the only point where I do “lecture”. I need better resources. I will put together the resources I have and share them. Mostly slides, videos, links.

Something that I have started to really stress this year is the fundamental truth that computers are stupid (I always write it stoopid for some reason). Our world tends to anthropomorphize tech, and it seems to be because we don’t want to recognize our own agency over it. Even metaphors like “the cloud” are fundamentally dishonest. Computers and robots are things built and owned, manipulated and applied by people to do things. Some of those things are good, some are bad. We, the humans, are accountable.


#4

@steve1

A resource you may be interested in is Joy Buolamwini’s work at MIT on algorithmic bias. https://www.media.mit.edu/people/joyab/overview/

Hope that helps,
Andrea


#5

Hey @steve1 thanks as always for the thoughtful commentary on the curriculum. Sharing a few thoughts that I’m hoping are helpful.

  1. Based on some feedback from other teachers we think there may in fact be some resources missing in Unit 4 that were unintentionally excluded when we copied over to the 2018 version. This is on the order of links to articles / videos and not full lesson plans. We’re looking into it.

  2. While we did remove a few lessons from the curriculum for CSP 2018, most of the work we did was adding new resources or moving lessons around in the curriculum. I highly recommend checking out Appendix C of the Curriculum Guide [link] for a detailed summary of the work we did. It’s very possible many of the lessons you know and love are just elsewhere in the curriculum.

  3. Last year we prioritized pacing and PT prep because overwhelming feedback from classrooms indicated those were the parts of the course that needed the most work. As a result, Unit 4 was primarily updated with this lens, and some of the ideas or requests you’re sharing weren’t prioritized. This doesn’t mean we don’t see the value in the changes you’re recommending, but I just wanted you to know how we prioritized updates last year. We always have way more projects in our backlog than we can get to, and I’ll be the first to admit one of them is improving how we teach data and global impacts topics.

  4. Unit 4 of our current curriculum has a lot more external resources than other units of the course. Whether it’s articles, tools, or videos, we find this approach makes units harder to maintain, more likely to get dated, and harder to align with the look, feel, pedagogy, messaging, etc. that ideally would be part of our course. I agree we want students to be deep thinkers about issues of data, AI, global impacts, etc. and one of the challenges will be figuring out how to do that in a way that works for a national curriculum with thousands of classrooms using it. If you read between the lines here I’m saying “Steve I hear ya and generally agree” :smiley: . I’m also saying when I think about how to effectively fix the situation I’m thinking about more holistic approaches that I know will take more time.

  5. I like using this forum as a way to address the problem in the meantime. If there are different videos, articles, resources, etc. that you want or want to share, that’s why we have this community. I’m hoping more people will add additional articles as well.

Phew! I thought this was going to be a shorter post but alas, here we are. If you’ve read this far please rest assured that your commentary is heard and taken to heart. Thanks as always for writing in!
GT


#6

Thanks for the reply, GT. Here are a few more specifics, for posterity’s sake.

Here are some good links on “location”. The first three are all about positive uses for tracking cell phone location. The last one is more cautionary.

I am happy to provide context for any/all if it is useful. I worked for four years with the Grameen Foundation on mobile phone technology in Africa and Latin America.

The impact of Algorithms on humans:

Please promote this video by the amazing Joy Buolamwini (who was recommended above by @anmrobnott):


And this one


as a counterpoint to the TED video you promote in Unit 4 Lesson 1:

I won’t use this video again. I will replace it with Joy’s videos above.

In Joy’s video she also advocates for the Algorithmic Justice League:
https://www.ajlunited.org/
Which is brilliant!

It was not easy to find good resources on online advertising other than those telling us how to do it. Here is a good one from the media literacy perspective.

Filter Bubbles
In 2011 Eli Pariser saw 2016


I used this video for the first time this year and was FLOORED at how prescient it was. It was made in 2011, and he was warning that personalization was creating filter bubbles that could be dangerous, in that they don’t expose web users to difference and only provide them with recommendation-engine choices.

In Unit 4 Lesson 3 you have several articles critiquing Google Flu Trends. It would be great to see the same treatment for Facebook re the 2016 election and Cambridge Analytica.

Just as Lesson 3 critiques Flu Trends, it would be good to critique Facebook, given the extensive amount of excellent information that is out there about Facebook’s failures relative to Cambridge Analytica and how the Russians were able to use Facebook’s advertising tools to influence the 2016 election.

Some videos:
NYT Explainer: https://www.youtube.com/watch?v=mrnXv-g4yKU
BBC more personal Explainer: https://www.youtube.com/watch?v=FxzFR8dz1vw

Finally, I still do the spreadsheet exercises from previous years. Specifically, we fill out a survey every day starting at the beginning of the year about amount of sleep, homework, exercise, etc., then I anonymize the data, add gender and GPA, and teach pivot tables. I did not do a very good job of it this year because I did a poor job of writing my own curriculum. It’s just too hard to do on the fly, so I will need to spend the time to do it this summer.
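For anyone who wants to adapt this survey exercise beyond spreadsheets, here is a minimal sketch of the same idea in Python with pandas. The column names and the tiny dataset are invented for illustration (the real exercise would have one row per student per day, already anonymized):

```python
import pandas as pd

# Hypothetical anonymized survey data; column names are invented for this sketch
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "sleep_hours": [8, 6, 7, 5, 9, 6],
    "homework_hours": [2, 3, 1, 4, 2, 3],
    "gpa": [3.8, 3.1, 3.5, 2.9, 3.9, 3.2],
})

# Average sleep and GPA broken out by gender --
# the same summary a spreadsheet pivot table would produce
pivot = pd.pivot_table(
    df,
    index="gender",
    values=["sleep_hours", "gpa"],
    aggfunc="mean",
)
print(pivot)
```

Students could then vary the `index` and `aggfunc` arguments to ask their own questions of the data, mirroring dragging fields around in a spreadsheet pivot table.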

And the Good and Bad Visualizations exercise should really be put back in somewhere: https://studio.code.org/s/csp2-2017/stage/10/puzzle/1?section_id=1625721


#7

Thanks @steve1 for sharing these resources!!! I am planning on doing “excursions” in CS topics in between programming topics, and I think this set of resources could be a great addition for my students. I am wondering if you could elaborate on how you have students act on the information you share. Do you have them debate things? Do you have a format for a debate? Do you have them make “pro or anti” data sharing propaganda to share in the building? Have them write an opinion article? I am just wondering what other tools teachers have to make “global impact” come alive in their classes!

Thanks again!


#8

I don’t have great lesson plans for this. Impassioned lectures so far. I am doing the Explore Performance Task immediately following the Data, Privacy, and Security material, so this fits in the computing-innovation-impacts frame.

Just today there is a new article about how Facebook’s AI is selling ads to advance white supremacy. When it was pointed out to them that their AI was doing this, they said “Whoops.”

Zuckerberg told us in his testimony that his AI would save the day, that it would discover all the bad guy stuff and snuff it out. Turns out, his AI is being used to find stuff to sell to anyone that wants to buy it. Profit is the only master. The rest is hand waving.

I have to say, as a teacher at a very diverse school, I am absolutely livid at these tech companies. This spike in hate crime and white supremacy rhetoric is exacerbated by the filter bubbles and echo chambers that these platforms intentionally build. I am not saying that this is the outcome they wanted, but I am saying that by doing nothing more than using the tools they built in exactly the way they were designed, the very worst people are able to directly attack the well-being of my students.

This is a negative externality that is directly analogous to the environmental impact of fossil fuels. Additionally, just as the fossil fuel industry must move to alternative fuels, the tech platforms (Facebook, Google, Twitter, Amazon) must move to alternative topologies that help connect us across difference (instead of clustering us by similarity).

Aaaaargh, sorry, soapbox…