Notes to Computing and Moral Responsibility
1. The term technological artifacts here refers to (socially) constructed material or physical objects, such as computers, cars, and refrigerators, that human beings create and use to achieve a particular purpose or goal. This conception of technological artifacts is often used in social and historical studies to distinguish artifacts from natural objects and from other socially constructed artifacts, such as regulatory laws (Hughes 1982; Bijker et al. 1987). For more on the concept of artifacts, see the entry Artifacts.
2. According to Bijker et al., the interpretive flexibility of technological artifacts means that “there is flexibility in how people think of, or interpret, artefacts” and “that there is flexibility in how artefacts are designed” (Bijker et al. 1995, p. 40). That is, different ‘relevant social groups’ have varying criteria for judging what makes a design superior or even workable, depending on their often competing goals and interests, as well as on distinct ideas about what a particular artifact should do.
3. A long-running philosophical debate about Artificial Intelligence centers on the thesis that processes of the mind could be generated by computational structures (McCorduck 1979). Critics of AI have taken exception to the suggestion that the human mind and computers could be thought of as governed by the same general principles (Graubard 1988). They have argued against the presupposition that knowledge and intelligence can be captured in computational structures and mathematical or logical models, pointing to a range of supposedly inherent properties or abilities that humans have and machines lack, such as emotion, common sense, and intentionality. One of these critics, Searle, was the first to use the term ‘strong AI’ to refer to the philosophical position that a computer with the right kind of program can literally be a mind, able to understand and have other cognitive states (Searle 1980). He distinguished this kind of research from what he called ‘weak AI’. Weak AI makes no claim that computers are minds; it merely holds that computers are useful for testing particular explanations of mental processes because they can simulate these processes. Contrary to strong AI, this position does not claim, according to Searle, that computers literally are the explanation (see also the entry on the Chinese Room argument).