Illustration by Nora Wildberg

“I’m a person who believes that ethics is a thing that should be incorporated into every element of what you do, everything you’re ever working on: your summer work, your academic research, your school work. You should think of the ethics of what you’re doing,” says Jasmine Peled, a senior in the Computer Science (COS) department. She’s taken it upon herself to address an old, albeit increasingly important, question in her senior thesis: where does ethics fit into engineering?

We’re sitting at the tall, round tables in Frist while we discuss Jasmine’s thesis. She starts off the conversation by preemptively apologizing for talking too loud or too much, or some mixture of both. She’s come to our meeting without any bag or items; presumably our discussion is one stop on a long list of things to do that day, from meeting with her adviser to go over the finalized outline of her thesis, to meeting with department chairs about the logistics of her enormous project, to somehow finding time somewhere in the middle to relax or spend time with friends.

“The big picture idea behind my thesis is that the computer science department here doesn’t do the best job of teaching ethics within classes, and I wanted to fix that.”

Engineers have long been concerned with the incorporation of ethics into practical matters. In the United States, the Order of the Engineer (an organization founded to uphold ethical standards in the profession) was established in 1970, mirroring the practices of a Canadian engineering ethics society founded in 1925. Both organizations maintain a long-standing tradition of holding a ring ceremony for engineers graduating from accredited universities. Legend has it that the ring bestowed upon engineers entering the workforce is crafted from steel of the first Quebec Bridge, which, in the first major contemporary engineering crisis, collapsed due to poor planning on the part of the engineers responsible for overseeing the project. The ring, according to lore, is a physical reminder that engineers must hold themselves to the highest standards in any and all engineering choices, and always do what is morally right over what is practically expedient.

It’s not unusual for professions whose primary work serves society in some way or another to concern themselves with ethical considerations. Doctors, for example, traditionally take the Hippocratic oath, a promise to do no harm, before beginning to practice medicine. These moralistic pledges concretize an interpersonal contract that, when not considered explicitly, runs the risk of being taken for granted. Professionals, particularly those technically inclined, and particularly those who have not been exposed to extensive ethical training, may cause irreparable harm in their daily work, knowingly or not. Before being unleashed into the world, these professionals must be made conscious of the implications of their actions, or at the very least, of those related to their profession.

In most cases, ethical oaths revolve around careers that remain relatively static. While new diseases are uncovered and new treatment methods are constantly being employed, the general framework doctors follow in treating patients has remained, and will likely remain, fairly consistent from one generation to the next. In other words, the ethical considerations that newly minted doctors bring to their practices aren’t significantly different from those of the doctors who trained them, or of those who trained their teachers, and so on. Broadly speaking, unlike engineering, medicine is not prone to seismic shifts within the profession itself.

Computer science has only recently developed as a viable field under the engineering umbrella, and its development raises a vast new swath of technical, practical, and perhaps most importantly, moral issues. Unlike traditional engineering professions, which are primarily concerned with people’s physical safety (i.e., how to ensure that the bridges people cross are not destined to collapse, that the airplanes people ride are not destined to crash, etc.), computer science, and the ethical questions surrounding its practice, straddle the physical and psychological conditions of the individual. Concerns with data integrity and security are as prevalent as concerns with autonomous systems (like self-driving cars) being deployed in public spaces.

In many cases, moral issues undergirding practical computer science balloon in front of the public eye before they can be adequately addressed by computer scientists themselves. Only a few weeks ago, news broke that a data science firm, Cambridge Analytica, had exploited Facebook’s lax data-sharing policies to obtain personal data from platform users and set the stage for manipulation in the 2016 election. This story, and its massive international media presence, naturally brought to light pressing questions concerning the security of personal data: to what extent is Facebook responsible for protecting the data its users hand over? What should users of social platforms expect from their providers?

But faulty (even unintentionally faulty) software has been causing harm, both social and physical, for decades. Between 1985 and 1987, the Therac-25, a radiation therapy machine, administered massive overdoses of radiation to cancer patients in at least six known incidents, causing severe injuries and several deaths. Investigations later discovered that a race condition, a bug triggered when the machine had to handle multiple operator inputs at once, had gone undetected by review (and, it turned out, the code had never actually been independently reviewed). Additionally, the programmers responsible for the software failed to make the machine report informative error messages to the specialist operating it, which resulted in several cases of specialists being completely unaware that anything was awry in the treatment sessions.
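The flaw at the heart of the Therac-25 is a classic race condition: pieces of state that must change together get updated in separate, unsynchronized steps, so an unlucky interleaving can expose an inconsistent combination. The sketch below is not the Therac-25’s actual code; it is only a minimal Python illustration of that class of bug, built around a hypothetical BeamController whose mode and dose can briefly fall out of sync.

```python
import threading
import time

class BeamController:
    """Hypothetical controller whose mode and dose must always change together."""

    def __init__(self):
        self.mode = "x-ray"   # current beam mode
        self.dose = 100       # dose appropriate for x-ray mode (arbitrary units)

    def switch_to_electron_mode(self):
        # BUG: the two fields are updated in separate steps with no lock,
        # so a concurrent reader can observe "electron" mode still paired
        # with the much larger x-ray dose.
        self.mode = "electron"
        time.sleep(0)         # yield to the scheduler, widening the race window
        self.dose = 1         # electron mode needs a far smaller dose

    def read_settings(self):
        # Reads the two fields separately; may see an inconsistent pair.
        return self.mode, self.dose


def demo():
    # Try many times: the race only shows up under unlucky timing.
    for _ in range(10_000):
        controller = BeamController()
        updater = threading.Thread(target=controller.switch_to_electron_mode)
        updater.start()
        mode, dose = controller.read_settings()
        updater.join()
        if mode == "electron" and dose == 100:
            print("inconsistent state observed:", mode, dose)
            return
    print("no inconsistency observed this run (races depend on timing)")


if __name__ == "__main__":
    demo()
```

Most runs of this sketch see nothing wrong, which is precisely what makes this class of bug so hard to catch in testing and review.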

Between then and now, the issue of ethical engineering and preventing harm caused by lazily constructed software has mainly been approached as one of de facto quality control. Intense research has gone into devising efficient means of automatically verifying software, a process in which one computer program is “verified,” or checked for bugs, by another piece of software. While this begins to scratch the surface of the problem by preventing already faulty software from reaching the market, it leaves open what prefigures the creation of such software: how to encourage software engineers to make ethical choices before writing code, to think beyond the scope of the specification in front of them, and to think critically about how the design choices they make impact the people who interact with their software, and about whether or not those choices cause irreparable harm.
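To make the idea of software checking software concrete, here is a toy sketch in Python, far simpler than the model checkers and static analyzers that verification research actually produces. A hypothetical verify helper exhaustively tests a stated safety property of a hypothetical clamp_dose function over a small range of inputs; real tools reason about all possible inputs rather than enumerating them.

```python
def clamp_dose(requested: int, maximum: int = 100) -> int:
    """Function under test: it should never return a dose above the allowed maximum."""
    return requested if requested <= maximum else maximum


def verify(func, prop, inputs):
    """Minimal 'verifier': report every input for which the property fails."""
    return [x for x in inputs if not prop(func(x))]


if __name__ == "__main__":
    # Property to check: the returned dose never exceeds 100.
    violations = verify(clamp_dose, lambda dose: dose <= 100, range(-50, 500))
    print("property holds on all tested inputs" if not violations
          else f"property violated on: {violations}")
```

The point of the sketch is the division of labor: the safety property is written down explicitly, and a separate program, rather than the original author’s eyeballs, checks that the code satisfies it.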

Encouraging computer scientists to engage with these questions before they manifest in massive, harmful technical disasters is Jasmine’s primary goal in her work. This starts, she believes, with education: “The idea is that if you’re alerted in your classes about the ethical issues you may encounter in the field, you probably will run into ethical dilemmas on the larger scale, but you’ll be equipped to deal with them, and you’ll actually think about the fact that you should deal with them.”

Over the summer and into the beginning of this year, Jasmine researched the topic of ethics in computer science. She was primarily interested in approaching it from a theoretical standpoint, only to hit a roadblock quite quickly.

“There really is a very small amount of theoretical work dealing with ethics in computer science; it’s kind of nonexistent.”

After confronting the meager resources available on the topic, Jasmine began to think critically about how to approach the same issue from a practical standpoint. After recruiting an adviser, Prof. Nick Feamster, and considering his proposal that she help him incorporate ethics into one of his classes, Jasmine began to formulate a plan for the project that would become her thesis. She resolved to explore existing experiments in incorporating ethics into COS curricula, and to begin forging a path for the Princeton COS department to reformulate some of its courses to be more cognizant of ethical issues in the field.

Jasmine’s passion for her project crops up in small moments in our conversation: the conferences she’s attended where computer science faculty discuss the implications and means of incorporating ethics into core curricula; the breadth of her reach in obtaining personal accounts from students in a wide range of computer science programs across the US. When she speaks of her work, a steady conviction rises through her words; this is not a thesis in the traditional sense, a monolithic project that one tends to lose interest in after a few months. This is a project with immense traction, and a force driving it forward even as it trudges through murky, unmarked territory.

Her work fits into the beginnings of a watershed movement taking over institutions that teach computer science. Recently, Harvard, MIT and Stanford have begun teaching stand-alone courses on the ethics of computer science. Harvard has a track within the computer science major that is entirely dedicated to the ethical and social considerations of the field. Increasingly, institutional leaders in the field are concerning themselves with how to teach the discipline in a multi-faceted way: how to address the plain reality that the field touches the private and public lives of more and more people in the world, and that with this broad coverage of and influence over the human condition comes an extremely large set of ethical concerns that must not be swept under the rug.

In the quest to bring ethics into the education of its computer scientists, Princeton has a lot of ground to cover: “Lacking ethics education is a pretty systemic issue,” Jasmine says. “There are schools (like Harvard, Stanford and MIT) that we are miles behind.” Ethics and public policy is not a track within the COS major at Princeton. Certificate programs, like Technology and Society, provide an opportunity to enrich a COS (or any other major’s) education with topics relating technical matters to social issues, but the requirements for the certificate do not stress or enforce standards that encourage students to engage with pertinent ethical concerns.

Of course, the meager presence of ethics in the computer science curriculum isn’t intentional so much as it is generational: most professors writing course syllabi weren’t taught computer science through an ethical lens when they were students. The natural impulse to imbue curricula with a moral spin is lacking among a great percentage of the COS faculty, both at Princeton and in computer science departments at other schools. On top of this, stigma around the topic of ethics in computer science itself chokes off efforts to engage with the material candidly: “Somebody at Harvard who I was talking to recently said that […] the ethics and policy branch of the computer science major is the joke one,” Jasmine reflects. “It’s the one that people who aren’t good at CS do or the one that a lot of women do; it’s the feminine major.” Jasmine relays this conversation to me, heavily punctuating each clause with air quotes.

One way to push back on this stigma is to sidestep traditional methods of teaching ethics: instead of offering stand-alone courses, as many other institutions have begun to do, Jasmine advocates for reformulating existing technical curricula to make room for the discussion of ethical questions. “If you have a standalone course, it becomes a joke, easy-A course that everyone takes because they don’t want to do real CS,” Jasmine says, air-quoting ‘real.’ Instead, Jasmine wants professors to encourage students to consider ethical dilemmas when they crop up naturally: “One of the reasons to embed ethics within existing courses is that if, while you’re doing networks, you’re also thinking about the ethics of networks, it’s a little bit less of this separate field.” In other words, ethics commingles with computer science, rather than evolving in parallel. Computer scientists would be trained from day one to treat the ethics of their practice as a part of the practice itself, rather than as secondary to the technical tasks at hand.

Jasmine has been working steadily on two large aspects of the project: drafting updated curricula for professors interested in putting a more ethics-heavy spin on their courses, and assessing pedagogical methods for imbuing COS courses with ethics. These tasks involve reformulating current COS classes from the ground up while, in parallel, conducting workshops aimed at teasing out the most effective methods of teaching ethics to budding computer scientists.

But the question remains: how much of this will stick? A great deal of the recent international interest in ethics has been wrapped up in the swirl of news cycles, gaining traction on the curlicue of a headline, often dropping off after people have moved on to the next news item. What will happen when Jasmine leaves, when her work, at least for a short period of time, is put on pause?

“I want to make sure that this is able to go on beyond me,” Jasmine says, holding eye contact that intensifies as she goes on. “My hope is that every class or almost every class would have ethics embedded to its core in the way that the course is taught. One of the reasons that that’s hard to do is that faculty changes, and lecturers change, and there is some amount of turnover in terms of who’s teaching the class, and often when professors come in, they’ll take assignments they used to give with them.”

While the project’s long-term tenability remains uncertain, there is a persistent instinct to find the silver lining. What matters right now is making sure that at least something is being done. Permanence is an important question, but it’s secondary to the initial action. Jasmine may be one of the first to address the topic of ethics in CS in a practical way (i.e., thinking rigorously about how to change curricula to accommodate it), but she represents an undercurrent within the community, one that suggests the movement will remain powerful and maintain its momentum long after she’s left Princeton.

“Divorced from me, that movement is happening on its own: professors are self-motivated to care and younger people are self-motivated to care,” she says, glancing down for a bit. “I also think that recent world developments have pushed a lot of students to care more about these things in a way that people weren’t as concerned about before. People are thinking about these things as being important.”

The current within the community is strong, though it would benefit from consistent investment. A powerful facet of most ethical codes is their ritual ceremonies: the bestowal of a physical ring on engineers in the Order of the Engineer, the recitation of the Hippocratic oath. These ceremonies insist that questions of moral magnitude and ethical conduct are just as important as the technical challenges facing those entering the field. Overturning decades of computer scientists’ habits of conduct is a Herculean effort, but it’s one that is perhaps long overdue.

Even if sinuous news cycles and crisis-prone economics drive much of the interest in ethics in CS, there’s hope that consistently teaching what an ethical computer science career demands will be enough in the long run. As Jasmine puts it, “I think that interest in ethics ebbs and flows a lot, just based on the world, but I think that focusing on incorporating ethics into the undergraduate education is something that can help nudge that ebb and flow in the right direction.”