Should we fear AI?

Published 3rd Mar, 2019
27 min read

I originally wrote this for my A Level Extended Project Qualification (EPQ).

Introduction

In recent years the capabilities of computer systems have grown rapidly, allowing them to achieve things once reserved for the world of science fiction. As algorithms begin to surpass human ability in more and more areas, worries about the future grow.

Artificial Intelligence (AI) has already asserted its dominance in many fields. These systems have beaten the world's best players of chess and Go, analyse terabytes of data every second, from internet traffic to weather readings, and are even learning how to drive cars on public roads.

In light of the first pedestrian death caused by a self-driving car in March 2018, there has been a push from experts and the public alike to regulate AI development, which has brought even more questions and concerns about the field's future to light. There are not only concerns about automation and the irreversible dependence we now have on technology; many people, quite possibly influenced by works of science fiction, worry that creating such independent intelligences could have serious implications for the human race, potentially destroying our society as computers outgrow human intelligence and control and become super-intelligent.

This essay hopes to offer a judgement on whether fears about the rise of AI are justified: to explain why some misconceptions can lead to flawed fears, and to outline the positive impacts and legitimate risks Artificial Intelligence poses, both now and in the future.

What is meant by Artificial Intelligence (AI)?

It is important to outline the field of Artificial Intelligence and its key terminology before exploring the question, so as not to fall victim to the same misconceptions many hold about AI, and to ensure all arguments start from a common foundation.

Artificial Intelligence is a notoriously hard field to simplify into a single definition, partly because of the breadth covered by this umbrella term, but also because, as put by McCarthy (1998), "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent."

In 2019, the Oxford English Dictionary defines Artificial Intelligence as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." This definition suffers from the same inadequacy pointed out by McCarthy: we can only be secure in a definition of Artificial Intelligence if we agree on some definition or idea of intelligence, especially where it pertains to computers. Using the Oxford English Dictionary again, intelligence is defined as "The ability to acquire and apply knowledge and skill." From these we can synthesise a definition of an artificially intelligent system: a computer system capable of performing a task or making a decision by applying knowledge and past experience.

Important Terms

Narrow AI

Artificial Narrow Intelligence (ANI), also known as "Weak" AI, is the AI that exists in our world today. Narrow AI is AI that is programmed to perform a single task — whether it's checking the weather, being able to play chess, or analyzing raw data to write journalistic reports. (Jajal 2018)

General AI

Artificial General Intelligence (AGI), or "Strong" AI, refers to machines that exhibit human intelligence. In other words, AGI can successfully perform any intellectual task that a human being can. This is the sort of AI that we see in movies like "Her" or other sci-fi movies in which humans interact with machines and operating systems that are conscious, sentient, and driven by emotion and self-awareness. (Jajal 2018)

Super-Intelligence

An Artificial General Intelligence that vastly outperforms the best human brains in every significant cognitive domain. (Bostrom 2009)

Machine Learning

The capacity of a computer to learn from experience, i.e. to modify its processing on the basis of newly acquired information. (Oxford English Dictionary)

Why are some views towards AI flawed?

The Problem of Anthropomorphization.

Anthropomorphization is the attribution of human form or traits to something. In the context of AI, "something" refers to a computer system, algorithm or perceived intelligent agent. This is a primary influence on people's misguided beliefs about Artificial Intelligence, as it builds up false notions of what these systems are, and indeed what they are actually capable of doing.

For example, take personal assistants like Siri, Cortana and Alexa. There is a reason all of these technologies have been given human names: it provides a more authentic interaction between man and machine, and influences the user to trust the system. This is why many robots designed to interact with people are given eyes, mouths and other discernible human idiosyncrasies. It exploits the same human characteristic that lets us feel compassion towards animals. We perceive an intelligence and personality, even if we have no direct communication or concrete evidence to prove such.

It is this same characteristic, but reversed, which also influences feelings towards Artificial Intelligence. Yudkowsky (2008) writes:

“Anthropomorphism leads people to believe that they can make predictions, given no more information than that something is an ‘intelligence’—anthropomorphism will go on generating predictions regardless, your brain automatically putting itself in the shoes of the ‘intelligence’.”

Yudkowsky's allusion to "predictions" here refers to predictions of what an agent will do, and of how it will develop in the future.

His point supports the idea that people build up false notions of a system's intelligence and cognitive ability based more on what they project onto the agent than on the reality of the agent itself. The most common form of anthropomorphism that occurs with intelligent agents is to wrongly ascribe a concept of will, meaning the perception that a machine has a rationale behind its actions.

The training of machine learning algorithms often uses a process where a set of "training data" teaches the algorithm how it should behave. Given the task of making an AI capable of recognising cats, you might feed in thousands of images, telling the system whether each image contains a cat or not. Over time the system learns which common identifiers indicate a cat in an image, and can then use this accumulated data to weigh up the likelihood that a brand new image contains a cat. If you were to then feed in a picture of an elephant, your new algorithm would be useless at identifying it.
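To make the idea concrete, below is a minimal sketch of this kind of training using scikit-learn. It is illustrative only: random feature vectors stand in for real cat and non-cat images, and the class sizes and model choice are assumptions made purely for the example, not anything a real image classifier would use.

```python
# Minimal sketch of supervised training on labelled examples.
# Synthetic 64-dimensional feature vectors stand in for real images;
# in practice these would come from pixel data or a feature extractor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Label 1 = "cat", label 0 = "not a cat" (purely illustrative data).
cats = rng.normal(loc=0.5, scale=1.0, size=(500, 64))
not_cats = rng.normal(loc=-0.5, scale=1.0, size=(500, 64))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" is simply fitting the model to the labelled examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out examples:", model.score(X_test, y_test))

# An input unlike anything seen in training (the "elephant") is still
# forced into one of the two categories the system knows about.
elephant = rng.normal(loc=5.0, scale=1.0, size=(1, 64))
print("elephant classified as:", "cat" if model.predict(elephant)[0] == 1 else "not a cat")
```

The final two lines mirror the point above: the model has no concept of an elephant, so it can only ever answer in terms of the categories it was trained on.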

Although a simple example, it demonstrates that Narrow AI, the kind of AI currently in existence, still has no real intellect or rationale; it is only able to perform what the user perceives to be a correct action because it has been trained to do so, by getting it wrong countless times.

The will of an AI is explored further by Yudkowsky later in his paper, where he describes the "Fallacy of the Giant Cheesecake". Its premise is that, given the size of cheesecake you can create is dependent on your intelligence, a super-intelligence could decide to build enormous cheesecakes the size of cities. The question is, why would it want to build them? As he says, "The vision leaps directly from capability to actuality, without considering the necessary intermediate of motive."

Take this quote given by Stephen Hawking to the BBC:

“The development of full artificial intelligence could spell the end of the human race….It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”

Without considering any additional context, this exhibits the "Fallacy of the Giant Cheesecake" as described by Yudkowsky. We jump from the capability (AI progresses to a general or super-intelligence with the ability to supersede humanity) to the actuality (humanity is wiped out by the intelligence) without considering the motives: whether the intelligence would desire to grow exponentially, and whether it would care about its effect on humanity.

This example shows how anthropomorphization can lead to unrealistic views of how AI will develop and thus stoke the fears surrounding it. Incorrectly projecting human traits onto a system can lead to dramatic over-assumptions about its intelligence, which in turn has a profound impact on what we are willing to believe about how these systems will develop.

Seeing is Believing

The role of the media in influencing society's beliefs.

"Seeing is believing" is a common idiom meaning that actually seeing or witnessing something, as opposed to simply being told about it, allows one to believe that it is true or has occurred.

The majority of people will have most, if not all, of their exposure to Artificial Intelligence through the media, whether that be news stories, films or books. Although not a problem in itself, this does often create unrealistic expectations for how AI will develop, and it raises the interesting notion that some fears towards AI are in fact inherited from the media around us.

A common, and extreme, fear towards AI is the idea that "AI will rise up and take over the world." The details vary drastically: some suggest we will create a system that will out-evolve us, others say an AI will turn on its creator, and others say AI will simply kill us accidentally as a side effect of trying to carry out a task. Could it be a coincidence, then, that some of the most popular movies concerning AI, "Terminator", "I, Robot", "Ex Machina" and so on, all focus on dystopian elements of future Artificial Intelligence? Most are not as extreme as large-scale extinction, and few are as black and white as "people good, AI evil", but all explore the idea that the development of AI could damage humanity and cause a significant negative impact both culturally and physically. It is also worth noting that nearly all AI portrayed in films is at the general to super-intelligent level, not at the realistic narrow AI stage we are at today.

A story published by the New York Times in 2015 explores a study by Michelle C. Pautz, who investigated how watching the movies "Argo" and "Zero Dark Thirty" affected audiences' opinions towards the government. She suggested that films can influence opinions on the topics they explore: in her study, "20 to 25% of participants changed their opinion - and generally more favorably - on a variety of questions asked about the government", matching the opinion the movies conveyed. Based on this, it would not be at all far-fetched to deduce that if these movies can change audiences' opinions towards government, movies about AI can change or influence people's thoughts towards AI.

This effect can be described with Transportation Theory, as explored by Melanie C. Green and Timothy C. Brock in their paper, The Role of Transportation in the Persuasiveness of Public Narratives (2000). They describe this effect as the "extent that individuals are absorbed into a story or transported into a narrative world, may show effects of the story on their real-world beliefs."

It would be remiss to say all films depicting artificial intelligence do so negatively: take WALL-E, for example, where the audience cannot help but feel compassion towards the protagonist, or Star Wars, with R2-D2 and C-3PO, who are much-beloved characters. It is also worth noting that not only do movies affect society, but society affects movies. Many movies explore current trends in thinking, picking apart the zeitgeist, and can also mirror the time in which they were produced.

The idea of the media influencing opinions can also be linked to the differences in opinion between those with experience or knowledge of technology and those with little to no experience in the field, as the people most susceptible to external influences are likely the ones lacking understanding.

A study published in January 2019 (US Public Report) surveyed 2,000 Americans on their feelings towards AI.

On average, 41% of those asked somewhat or strongly supported the development of AI, as opposed to 22% who somewhat or strongly opposed it. Of the rest, 28% neither supported nor opposed it, and 10% did not know. It gets interesting, though, when we consider respondents' Computer Science experience in relation to their answers. Those with experience or degrees in Computer Science were on average much more likely to support developing AI: 31% of those with no experience somewhat or strongly supported AI development, compared to 58% of those with experience. These results could suggest that people with more knowledge and background in the field are more trusting of AI, or have a clearer view of its possible advantages, and that they feel it better to continue the research and progression of advanced AI systems. This could indicate that, as a whole, we should be less sceptical and fearful of the future of AI, if those who are most likely to be well informed believe its development to be worthwhile.

Automation

"Robots will take all our jobs." This phrase has existed in different variations for many years; it wasn't always "robots" explicitly, and in the past it has been applied to many forms of technology. "Luddites" - a term used to describe those opposed to new technology - draws its origin from the early 19th century, when groups of English textile workers, worried about how new machinery in cotton and wool mills would affect employment, took to protesting, marching and destroying these machines. Now, in the present, we clearly see the benefit of such technological advancements. Take the adoption of the automobile, which many at the time worried was a danger and menace to the roads, but without which the world could now barely function. There was once a time when flint was the cutting edge of technology, when man's dream to fly was just that, a dream. There is a common phrase, to "fear the unknown", and this outlook can clearly be seen in the wariness some hold towards technological advancements; fears towards the rise of Artificial Intelligence and automated systems are no different.

Autor (2015) states that "journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor." His point links back to the previous section, in which it was made apparent that the media can affect our perceptions of technology, something especially prevalent in the topic of automation. A quick search for news on "automation" gives, among others, these headlines:

Baby, Baby, Baby, Where Did Our Jobs Go? How Automation Will outsource everything to machines (CleanTechnica)

Automation is Coming for American Workers, Says Mayor Pete Buttigieg (NowThisNews)

Automation threatening 25% of jobs in the US, especially the 'boring … (CNBC)

(Google News Tab, first page only. Search on 28/01/2019)

There is a clear theme here. This is not to say that the concerns and points raised are totally invalid; they are not, and it would be irresponsible not to convey this, as openly accepted by Autor: "In 1900, 41 percent of the US workforce was employed in agriculture; by 2000, that share had fallen to 2 percent." This shows technology can still have a definite effect on the employment of large groups of people. But society adapts.

In an economics report by PricewaterhouseCoopers (PwC), it is estimated that although 7 million existing jobs could be displaced, around 7.2 million jobs could be created, meaning net employment should remain broadly unchanged across the UK. However, John Hawksworth (PwC's chief economist) did admit that "the distribution of jobs across sectors will shift considerably in the process", and the report also states: "Historically, rapid technology change has often been associated with increases in wealth and income inequality, so it's vital that government and business works together to make sure everyone benefits from the positive benefits that AI can bring", showing that fears towards automation, and towards more intelligent AI systems taking jobs, are not totally unjustified.

The primary issue with automation, therefore, is not necessarily imminent unemployment, but adapting to the new roles and challenges these advancements create, something which can be addressed if handled effectively by companies and governments. If these risks and concerns are not addressed in the coming years, however, automation driven by AI will start to become a definite problem, with the potential for millions of jobs to be lost. These advancements could also increase the wealth inequality between the rich minority and the working-class majority, which could even lead to the need for widespread Universal Basic Income or other such systems.

Why are some fears for AI justified?

Hanlon’s Razor and Algorithmic Bias

"Never attribute to malice that which is adequately explained by stupidity.” - Robert J. Hanlon

I originally came across this phrase in relation to computers through an educational video by Tom Scott, where he used it in relation to a bug in a mobile game which gave the developer access to a user's entire email account. The developer came under heavy criticism, but was adamant it was an error, and applied a patch to solve the problem. No evidence of malpractice was found.

Here we have an example of a problem in technology, where the error, the weak link, is the human behind it.

Hanlon's Razor can be applied to the field of Artificial Intelligence if we take "malice" to mean a purposely destructive act by an AI, and "stupidity" to mean the errors and oversights of the humans behind it.

As mentioned previously, a common approach to machine learning is to use what is known as "training data", in which sets of data are used to train a system in the task it is to perform. This creates issues when applying AI to real-world situations. In its simplest form: AI is only as good as the data it is trained on. The problem? Humans are far from perfect.

In 2018 it was reported that Amazon had abandoned an AI algorithm intended to help recruitment by filtering hundreds of applications. The problem, overlooked by the team working on it at the time, was that the AI was trained predominantly on male applications, as these made up the majority of past applicants. They found the algorithm started to penalise applications containing the word "women"; this was corrected, but it became clear the AI wasn't working as desired and the project was abandoned.
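The mechanism behind this kind of failure can be shown with a toy example. The sketch below is not a reconstruction of Amazon's system; the CVs, tokens and hiring labels are entirely invented. It simply demonstrates that when the past decisions used as training data are skewed against a particular word, a model trained on them learns a negative weight for that word.

```python
# Toy illustration of how skewed historical labels produce a biased model.
# All data here is synthetic; this is not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented past "hiring decisions": CVs containing the token "women"
# (e.g. "women's chess club captain") were mostly rejected historically.
cvs = (["software engineer python experience"] * 40
       + ["software engineer python experience women chess club"] * 10
       + ["python experience women coding society"] * 10)
hired = [1] * 40 + [0] * 10 + [0] * 10   # past outcomes, not true ability

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# has inherited the prejudice present in its training data.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 2))
print("weight for 'python':", round(weights["python"], 2))
```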

This isn't the only incident of biased AI; there have been multiple reports over the last few years of racially prejudiced systems. One algorithm, used to help police in several US states predict where and when crimes might occur, was found to unfairly target certain neighbourhoods with higher numbers of racial minorities. This was speculated to be because the algorithm was trained on reports from human police officers, which can carry the same prejudices, meaning the system was effectively being trained to discriminate. The severity of this problem is demonstrated in a quote from the website of IBM, a leader in AI research: "Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem." It goes on to say that algorithmic bias "occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it's critical to develop and train these systems with data that is unbiased and to develop algorithms that can be easily explained."

This point is less clear-cut than some of the others made in this essay. It is clear that AI in and of itself is unlikely to pose a threat to humanity, certainly in the present and near future, meaning that fears about AI in isolation are unfounded. However, it does raise a legitimate risk and a major concern in the development of Artificial Intelligence: algorithmic bias. These algorithms can be trained with implicit biases, so AI designed to improve human life and expand our abilities might end up suffering from the exact same problems and pitfalls as its creators.

The use of AI in weaponry.

On January 9th 2017, the US Department of Defense announced a successful test, and released footage, of a "micro-drone swarm": 103 small drones deployed from fighter jets were able to act as an autonomous swarm, with "collective decision-making capabilities, adaptive formation flying, and self-healing." This advancement has serious implications for the future of warfare, and for the role of AI systems within it.

Lethal Autonomous Weapons Systems (LAWS) is the term given to technology capable of independently carrying out defensive or offensive military tasks. Current systems in deployment still rely on humans for the final decision, but the concern is that eventually these systems will be fully autonomous, with the capability to decide whether to engage an enemy with no intervention from any human operator. The development of these independent units has already led to many protests and open letters to governments around the world calling for an end to this new arms race. One such organisation, the Campaign to Stop Killer Robots, argues this technology "crosses a moral threshold. As machines, they would lack the inherently human characteristics such as compassion that are necessary to make complex ethical choices." Many feel, quite possibly correctly, that these systems should not be given this level of responsibility, and that to do so would make it harder to get justice for victims, as laws are currently unclear on who is held accountable for the actions of an AI. As put by historian Matt Whitman on drone swarms: "If we're just holding xbox controllers halfway around the world… on the human level it reduces one of our biggest checks in engaging in violence." We are much more likely to engage in a conflict if our own people are out of harm's way, which links to the arguments made by the Campaign to Stop Killer Robots, who also point out that such remote weapons "shift the burden of conflict even further onto the civilians", as it is civilians who would suffer most if both sides engage with this technology.

It is not just dedicated opposition groups who have made their views clear. In 2018, many Google employees signed an open letter to Google's CEO, Sundar Pichai, with some resigning in protest, voicing their outrage at Project Maven, a partnership between Google and the US Department of Defense applying AI to military drone footage. They demanded that the company withdraw from the contract, and urged it to publicly state it would never help develop AI weaponry, pointing out that its actions seemed to contradict a famous section of the company's code of conduct: "don't be evil." Google subsequently decided not to renew the contract when it expires this year.

These protests against the use of Artificial Intelligence in weaponry, especially by those working in the industry, highlight the severity of the problem, and suggest that as of now there are still no adequate regulations or agreements to ensure these systems will be safe and moderated as development continues into the future.

Why we should embrace the future of AI.

Much of this essay has focused on the potential risks that AI raises, those already materialising, and the common misconceptions which unfairly shape people's perceptions of AI. What has not yet been covered are the real benefits that Artificial Intelligence could provide to society and humanity as a whole. By focusing so much on the future, it is also easy to overlook the fact that AI is already present in many technologies today, and that its positive effects are felt all over the world, every day.

Consumer Market

A common and popular application in the present is the development of personal assistants and natural language processing, evident in products such as Google Home and Alexa, but also incorporated into the rest of our technology, like Siri, Cortana and Ok Google. These systems have already begun to change how we interact with our technology, allowing for a more natural and personal connection with our devices.

A less obvious application of machine learning and Artificial Intelligence techniques in today's consumer products is the recommendation algorithm. Amazon, Netflix, Google and many more use AI to personalise consumers' feeds with products thought to be more appealing to them, using intelligence to enhance the user experience and the usefulness of their products.
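One common technique behind such features is collaborative filtering, where items are suggested because they are similar, in terms of user behaviour, to items a person already liked. The sketch below is a deliberately simplified, assumed version of this idea; the ratings matrix and item names are invented, and real systems at companies like Amazon or Netflix are far more sophisticated.

```python
# Highly simplified item-based collaborative filtering sketch.
# The ratings matrix and item names are invented for illustration.
import numpy as np

items = ["sci-fi film", "nature documentary", "space drama", "cooking show"]

# Rows = users, columns = items, values = ratings (0 = not yet seen).
ratings = np.array([
    [5, 0, 0, 1],
    [4, 1, 5, 0],
    [0, 5, 1, 4],
    [1, 4, 0, 5],
], dtype=float)

# Cosine similarity between item columns (how alike their audiences are).
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

# Score unseen items for user 0 by similarity to what they already rated.
user = ratings[0]
scores = similarity @ user
scores[user > 0] = -np.inf          # don't re-recommend items already seen
print("recommended for user 0:", items[int(np.argmax(scores))])
```

Here user 0, who loved the sci-fi film, ends up being recommended the space drama, because the users who rated the sci-fi film highly also rated the space drama highly.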

However, Rijsdijk et al. (2007) conducted tests into consumer satisfaction with intelligent products, stating: "Our results show that consumers do not appreciate intelligent products for their intelligence itself, but because of the relative advantage and compatibility that they deliver." While this does not contradict the idea that Artificial Intelligence in consumer products is a positive, it does show that it is not AI itself which has the impact, but its effects, which is to be expected. People do not value their phones simply because they are computers, but for what they enable them to do. So while, based on this study, consumers do not explicitly value Artificial Intelligence in their products, the positive effect of AI on those products is clearly apparent.

Medical Diagnosis and Treatment

Artificial Intelligence has started to revolutionise the field of medicine, especially diagnosis. Using image recognition technology, systems are now able to predict cancer growth up to 20 years before human scientists could. As quoted by the BBC, from Caravagna et al (2018): "This new approach using AI could allow treatment to be personalised in a more detailed way and at an earlier stage than is currently possible, tailoring it to the characteristics of each individual tumour and to predictions of what that tumour will look like in the future."

AI is also being applied to help those who have been partially or completely paralysed. Using sensors implanted in the brain, it is now possible for Artificial Intelligence to allow patients to move robotic arms, or indeed their own, by measuring electrical signals from the brain. The patient is asked to think about moving their arm in different ways; the software then learns which electrical signals from the brain represent these movements. Once the system has gathered enough data, it can monitor the brain and, when it detects a given signal, either move a robotic arm or send impulses to electrodes on the patient's arm which trigger the muscles to act. This application of AI is at the cutting edge of development, and shows the potential for this technology over the next few years not only to progress the field of computing, but to enhance the human body as well.
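Conceptually, the learning step described above is another classification problem: recorded signal windows, labelled with the movement the patient was imagining, are used to train a model that later decodes new signals. The sketch below is an assumed, heavily simplified illustration with synthetic data; real brain-computer interfaces involve far more signal processing and safety engineering.

```python
# Conceptual sketch of decoding intended movements from labelled signals.
# All "recordings" here are synthetic random data, standing in for the
# electrode measurements a real system would use.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
movements = ["rest", "raise arm", "grip hand"]

def fake_recording(movement_id, n=100, channels=16):
    """Stand-in for electrode data recorded while a movement is imagined."""
    return rng.normal(loc=movement_id, scale=0.5, size=(n, channels))

# 1. Ask the patient to imagine each movement and record labelled windows.
X = np.vstack([fake_recording(i) for i in range(len(movements))])
y = np.repeat(np.arange(len(movements)), 100)

# 2. Learn which signal patterns correspond to which intended movement.
decoder = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# 3. At run time, decode a new window and drive the arm accordingly.
new_window = rng.normal(loc=1, scale=0.5, size=(1, 16))
print("decoded intention:", movements[int(decoder.predict(new_window)[0])])
```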

Keeping humans out of danger

A common use of robots already is to take on particularly dangerous or hazardous jobs that people previously had to do. The most obvious application is in warfare, which, although already discussed as a prime area of concern in the development of AI, does potentially lead to fewer human fatalities, or, it could be argued, simply shifts the danger onto the more vulnerable.

One genuinely positive application of AI in dangerous situations, however, is in disaster response. It has been hypothesised that drone swarms or nano-robots could be deployed at a disaster zone to work autonomously, scanning the area for remaining survivors and co-ordinating their actions. This could be especially useful in post-earthquake zones where buildings are liable to collapse, or during fires, as it reduces the risk rescuers take, which could end up preserving more human lives as a consequence.

Conclusion

As shown throughout this essay, there are both positive and negative outlooks on Artificial Intelligence. We see its use in everyday life, from cancer diagnosis to recommendation algorithms, and we can see the promise it offers in pushing human technology further than it has ever been. However, the harm AI could cause has also been explored, both in the workplace, with the rise of automation, and on the battlefield, with the continuing development of LAWS and other weaponry aiming to automate warfare and defence. We have also seen that there is still ambiguity around the field of Artificial Intelligence itself, and that there are many misconceptions, misunderstandings and plain mistruths about what Artificial Intelligence is, what it can and can't do, and how it could affect humanity in the years to come.

It should be apparent that scaremongering claims that AI will "rise up and kill us all" are almost certainly incorrect, but it would also be wrong to dismiss all threats that AI could pose. To underestimate the potential negative impacts of AI is far more dangerous, and could lead to irreversible consequences. Take the warning of PwC on automation: they outline that AI-driven automation will radically change the job market, and believe, based on their study, that it won't cause the mass job loss people panic about. However, they are quick to point out that this depends on how companies and governments respond: "it's vital that government and business works together to make sure everyone benefits from the positive benefits that AI can bring."

This is also the case with the Campaign to Stop Killer Robots. They propose we must “Retain meaningful human control over targeting and attack decisions by prohibiting development, production, and use of fully autonomous weapons. Legislate the ban through national laws and by international treaty.”

Here they are not campaigning for the total eradication of AI in weaponry, as they realise it is potentially futile to protest for a complete ban, but rather wish to ban "fully" autonomous weaponry. They believe there should still be human intervention ("meaningful human control"), meaning humans overseeing this technology as it develops and during its use. This view is also important for the emerging problem of algorithmic bias: a lack of oversight and regulation of these systems is likely the most severe danger of Artificial Intelligence going forward.

In a report given to the European Parliament, Häggström (2017) makes the argument for rational optimism towards the AI future. This means having an "epistemically well-calibrated view of the future and its uncertainties, to accept that the future is not written in stone, and to act upon the working assumption that the chances for a good future may depend on what actions we take today."

This, I think, is the best and most productive way of approaching the development of Artificial Intelligence systems going forward. There is a multitude of potential benefits that AI can offer to society, which could change not only how we live and work, but also how we fundamentally think about the world around us. There are also many dangers, which could lead to the unemployment of millions or, in the worst case, threaten the very continuation of the human race.

So, should we fear AI? No, we should not. Not if we are willing to work together now to mitigate the risks AI could pose in the future. Maybe a more suitable question would be: should we fear humanity?

Bibliography

O, Häggström (2017) Remarks on Artificial Intelligence and Rational Optimism

http://www.europarl.europa.eu/RegData/etudes/IDAN/2018/614547/EPRS_IDA(2018)614547_EN.pdf

J, McCarthy (1998) What is Artificial Intelligence?, Stanford University CA 94305

http://cogprints.org/412/2/whatisai.ps

T, Jajal (2018) Distinguishing between Narrow AI, General AI and Super AI

https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22

N, Bostrom (2009) Superintelligence Answer to the 2009 EDGE QUESTION: “WHAT WILL CHANGE EVERYTHING?”

https://nickbostrom.com/views/superintelligence.pdf

E, Yudkowsky (2008) Artificial Intelligence as a Positive and Negative Factor in Global Risk, MIRI

https://intelligence.org/files/AIPosNegFactor.pdf

M, Pautz (2015) Argo and Zero Dark Thirty: Film, Government, and Audiences

https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/argo-and-zero-dark-thirty-film-government-and-audiences/889B13ED0B53B2DF7C09372D4ACCECE5

M, Green and T, Brock (2000) The Role of Transportation in the Persuasiveness of Public Narratives, Ohio State University

http://www.communicationcache.com/uploads/1/0/8/8/10887248/the_role_of_transportation_in_the_persuasiveness_of_public_narratives.pdf

D, Autor (2015) Why Are There Still So Many Jobs? The History and Future of Workplace Automation, Journal of Economic Perspectives—Volume 29, Number 3—Summer 2015—Pages 3–30

https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.29.3.3

Rijsdijk et al, (2007) Product intelligence: its conceptualization, measurement and impact on consumer satisfaction.

https://link.springer.com/content/pdf/10.1007%2Fs11747-007-0040-6.pdf

Caravagna et al (2018) Detecting repeated cancer evolution from multi-region tumor sequencing data.

https://www.nature.com/articles/s41592-018-0108-x.epdf?referrer_access_token=HqDf6j8ISzz1EGOcqIF6TdRgN0jAjWel9jnR3ZoTv0M_3QUFlfprLxJkQR6pLglZwg2rCJFKfkZEvhTVVqmaYfFipaaH88bYwB9WHH6bp5gCtCC19RhV4wEQKYRjdqoqXEikIweWJhEuL1xIhEz_0eER7RDV92F9i4pQ1YzU1n-KdCqYzLyasH1SH1sgE4KMK9mn3JuzhEK98GyXAhiLcLDk0t95S-ezSCEPjUVZayA%3D&tracking_referrer=www.bbc.co.uk

US Public Report: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/general-attitudes-toward-ai.html

PwC Report: https://www.pwc.co.uk/press-room/press-releases/AI-will-create-as-many-jobs-as-it-displaces-by-boosting-economic-growth.html

Stephen Hawking to BBC: https://www.bbc.co.uk/news/technology-30290540

IBM on AI bias: https://www.research.ibm.com/5-in-5/ai-and-bias/

New York Times on M, Pautz: https://op-talk.blogs.nytimes.com/2015/02/04/how-movies-can-change-our-minds/

Tom Scott (Hanlon’s Razor)

https://www.youtube.com/watch?v=cDZjm4f9CEo

Campaign to stop killer robots: https://www.stopkillerrobots.org/learn/

M, Whitman on AI swarm: https://www.nodumbquestions.fm/listen/2017/2/10/ndq003-drones-and-bears (12:00 - 12:53)