Karen Hao trained as a mechanical engineer and then joined a Silicon Valley startup, believing that technology was the best means of creating social change. Although she was surrounded by smart people who shared that passion, she soon discovered there were no incentives or pathways to actually accomplish it. “When you're inside a technology company and you're thinking this is going to help change the world, you're often blind to unintended consequences of your work,” she says.
She decided to transition to a career in journalism, where she could create social change by raising awareness about the social impacts of technologies like AI and about how big tech companies engage in “ethics washing” to protect their profits. She is intrigued by the way incentives shape work at a systemic level. Every tech giant, she says, suffers from the same systemic issue: there are people inside the organization who deeply care about ethics, but that doesn't mean the company is willing to change how its profitable technologies work. And employees are disincentivized from doing this work because they could be fired.
A high-profile example is Google's ethical AI team, which was doing strong work critiquing some of Google's practices and tried to get a paper published. Google refused to let the paper through, censored the research, and then fired both of the team's leads. It later came out that this was not an isolated instance of academic censorship: Google had told many other researchers to strike a positive tone when writing about technology the company was developing.
For her article, How Facebook Got Addicted to Spreading Misinformation, she conducted a nine-month investigation into Facebook's Responsible AI team, which was supposed to be understanding and mitigating the unintended consequences of Facebook's algorithms. She found that the team focused only on those unintended consequences, such as AI bias, whose fixes were compatible with Facebook's growth. It largely ignored the most important harms of Facebook's algorithms—the amplification of misinformation and the exacerbation of polarization, especially in the wake of the January 6 Capitol riots—because addressing them would undermine that growth. At times, Facebook was not merely ignoring or negligent about the issues its algorithms might be causing, but actively undermining efforts to fix them because of this tension with the company's growth.
She was glad to see policymakers cite her article at a recent congressional hearing, and she hopes Congress has the political will to regulate companies like Facebook. She says it's also important for each new generation of Facebook employees to become educated about these issues so they will hold the company accountable. In her view, AI research has shifted over the last five years toward taking more responsibility for societal impacts, and part of that evolution is being driven by people on the inside who raised awareness and advocated for change.
One of Karen's inspirations for going into journalism was Rachel Carson's book Silent Spring, which sparked a widespread environmental movement. She strives to write stories that activate that same level of change, transforming both the cultural conversation and the policy around important issues.