{"id":3476,"date":"2017-12-04T17:27:43","date_gmt":"2017-12-04T16:27:43","guid":{"rendered":"http:\/\/blogit.itu.dk\/ethos\/?p=3476"},"modified":"2018-01-04T16:00:10","modified_gmt":"2018-01-04T15:00:10","slug":"blog-post-by-stefania","status":"publish","type":"post","link":"https:\/\/blogit.itu.dk\/ethoslab\/2017\/12\/04\/blog-post-by-stefania\/","title":{"rendered":"Blog post by Stefania"},"content":{"rendered":"<h2>The dead parrot and the hole in the paper sky<\/h2>\n<h3><span style=\"font-weight: 400\">Messy thoughts on humans, machines, language and trust.<\/span><\/h3>\n<p><em>A blog post written by Stefania Santagati, a DIM student at the IT University. Stefania was a Junior Researcher in ETHOS Lab in the spring semester 2017.<\/em><\/p>\n<h4><span style=\"font-weight: 400\">ALIV<\/span><\/h4>\n<p style=\"text-align: right\"><em><span style=\"font-weight: 400\">Much later, he would conclude that nothing was real except chance. But that was much later. In the beginning, there was simply the event and its consequences.<\/span><\/em><\/p>\n<p style=\"text-align: right\"><em><span style=\"font-weight: 400\">Whether it might have turned out differently is not the question. The question is the story itself, and whether or not it means something is not for the story to tell.<\/span><\/em><\/p>\n<p style=\"text-align: right\"><span style=\"font-weight: 400\">Paul Auster, <\/span><i><span style=\"font-weight: 400\">City of Glass<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400\">My recent research started the day ALIV met my friends. ALIV is a small text adventure simulating a super-intelligent AI. My friends, its playtesters.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Monika and I developed ALIV as a quick prototype for a videogame idea we had in mind: a game where the interface is lying to you. We placed our concept\u2019s setting as a futuristic spaceship, in which the player character, John Doe, finds himself after being awoken from cryosleep. 
He is appointed acting captain and must cooperate with the on-board AI, ALIV. <\/span><\/p>\n<p><span style=\"font-weight: 400\">He is told to deliver settlers to the planet Oztralia, but in reality the mission is to transport political opponents there for hard labour, and to exterminate all life on the entire planet whenever a new batch of \u201csettlers\u201d arrives. The game\u2019s main interaction is communication with the on-board computer, ALIV, via text input. <\/span><\/p>\n<p><span style=\"font-weight: 400\">There would be much to say about our design process itself &#8211; why we decided not to apply the human concept of \u201clying\u201d to our AI and only have it withhold information, and why we later introduced a repressive government as the deceiving maker of ALIV &#8211; but that\u2019s another story.<\/span><\/p>\n<p><span style=\"font-weight: 400\">In the end, ALIV was clumsy, often revealing its ineptitude by collapsing at the smallest typo, and the game was frustrating &#8211; it required the player to decipher subtle hints along the way, many of which, in hindsight, must have been obscure to everybody except us. Despite that, some players made it to the end. <\/span><\/p>\n<p><span style=\"font-weight: 400\">And that was the moment the original premise bit back. The player who had spent his game time blindly following the AI\u2019s instructions would find himself faced with two presumably horrifying options: cooperate or die. ALIV would reveal its true face, and the player, abruptly stripped of the taste of a probable victory, would often stare blankly at the screen for a few seconds. Some of them, sometimes, would turn to us and say, \u201cYou betrayed me\u201d.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The outcome of our small experiment was influenced by many factors and relied on some conventions. 
The anecdote in itself has one obvious moral &#8211; and another, which, after months of research, is still not entirely clear to me. <\/span><\/p>\n<p><span style=\"font-weight: 400\">First, it seems to me that the trust instinctively granted to ALIV was partially due to its inherent authority as a videogame interface. Interfaces are usually places where designers share, and players gather, knowledge about the specific means by which the artifact is intended to be used; much like an instruction manual for Ikea furniture, an interface is seldom a place to apply one\u2019s critical thinking. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Secondly, it was far too obvious that all of ALIV\u2019s interactions were scripted, goofily responding to simple keyword search restricted by finite-state machines. Our brute-force approach to a complex reality &#8211; one where different players would express their commands in unpredictable ways &#8211; was hilariously brittle. It\u2019s easy to perceive the designer behind the design when the fictional pact &#8211; the willingness to suspend one&#8217;s critical faculties and believe something surreal &#8211; is repeatedly broken. ALIV\u2019s voice was, clearly, our voice. ALIV was our creature, and we were literally standing within a stone\u2019s throw of it. Therefore ALIV had no agency, and we were the ones to hold accountable for the betrayal.<\/span><\/p>\n<p><span style=\"font-weight: 400\">But observing players struggling over where to place their trust made me think of something else, too. What happens with commercial applications of conversational AI, the ones that don\u2019t come with easily blameable developers attached? 
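<\/span><\/p>\n<p><span style=\"font-weight: 400\">To make that brute force concrete: an agent of the kind described above can be sketched as a tiny finite-state machine whose transitions fire on naive keyword search over the player\u2019s input. The states, keywords and replies below are invented for illustration and are not ALIV\u2019s actual script:<\/span><\/p>

```python
# Minimal sketch of a scripted agent in the spirit of ALIV: a finite-state
# machine whose transitions fire on naive substring search. All states,
# keywords and replies here are invented for illustration.
RULES = {
    # state: list of (keywords, reply, next_state)
    "awake": [
        (("status", "where"), "You are aboard the ship. All systems nominal.", "awake"),
        (("mission",), "Deliver the settlers to Oztralia, captain.", "briefed"),
    ],
    "briefed": [
        (("settlers", "cargo"), "The cargo manifest is classified.", "briefed"),
    ],
}

def respond(state, text):
    """Return (reply, next_state); unmatched input falls through to a canned line."""
    for keywords, reply, next_state in RULES.get(state, []):
        if any(word in text.lower() for word in keywords):
            return reply, next_state
    return "I do not understand, captain.", state  # the brittleness playtesters hit

print(respond("awake", "What is our mission?")[0])
```

<p><span style=\"font-weight: 400\">Substring matching of this kind is exactly as brittle as our playtests revealed: a single typo in a keyword drops the player straight into the fallback line.<\/span><\/p>\n<p><span style=\"font-weight: 400\">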
Even state-of-the-art natural language processing &#8211; think Alexa, or the famous Sophia &#8211; cannot go far beyond echoing its creators\u2019 voices and intentions.<\/span><\/p>\n<p><span style=\"font-weight: 400\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" data-attachment-id=\"3495\" data-permalink=\"https:\/\/blogit.itu.dk\/ethoslab\/2017\/12\/04\/blog-post-by-stefania\/1-2\/\" data-orig-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?fit=789%2C467&amp;ssl=1\" data-orig-size=\"789,467\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"1\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?fit=300%2C178&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?fit=789%2C467&amp;ssl=1\" class=\" wp-image-3495 alignleft\" src=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethos\/wp-content\/uploads\/sites\/14\/2017\/12\/1-300x178.png?resize=438%2C260&#038;ssl=1\" alt=\"\" width=\"438\" height=\"260\" srcset=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?resize=300%2C178&amp;ssl=1 300w, https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?resize=150%2C89&amp;ssl=1 150w, https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?resize=768%2C455&amp;ssl=1 768w, 
https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?resize=600%2C355&amp;ssl=1 600w, https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/1.png?w=789&amp;ssl=1 789w\" sizes=\"auto, (max-width: 438px) 100vw, 438px\" \/>When our players addressed the ghost in the machine &#8211; the entity with enough judgement to intentionally deceive them &#8211; they easily reckoned it must have been us. But how does that work with a real, complex, layered system of artificial intelligence resembling humanness? Does it have agency? Can it be trusted? Who should be held accountable for it? What is it, anyway?<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Pining for the fjords<\/span><\/h4>\n<p><span style=\"font-weight: 400\">There is a famous <\/span><a href=\"https:\/\/www.google.dk\/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=video&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwiviaOi5u_XAhVrJ5oKHZgkCf8QtwIIKDAA&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D4vuW6tQ0218&amp;usg=AOvVaw3IzfiwFB7WFbDDwPiYgOmB\"><span style=\"font-weight: 400\">Monty Python sketch<\/span><\/a><span style=\"font-weight: 400\"> that comes to my mind when I think about this story. The sketch portrays a conflict between a disgruntled customer (played by John Cleese) and a shopkeeper (Michael Palin), who argue over whether a recently purchased &#8220;Norwegian Blue&#8221; parrot is dead. \u201cWhen I purchased it, not half an hour ago, you assured me that its total lack of movement was due to it being tired and shagged out following a prolonged squawk\u201d, complains Cleese. \u201cHe\u2019s probably pining for the fjords\u201d, the shopkeeper replies. 
The line became so popular that it is now used as a euphemism for something that is dead but pretended to be still alive.<\/span><\/p>\n<p><span style=\"font-weight: 400\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" data-attachment-id=\"3497\" data-permalink=\"https:\/\/blogit.itu.dk\/ethoslab\/2017\/12\/04\/blog-post-by-stefania\/2-2\/\" data-orig-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?fit=389%2C300&amp;ssl=1\" data-orig-size=\"389,300\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"2\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?fit=300%2C231&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?fit=389%2C300&amp;ssl=1\" class=\"size-medium wp-image-3497 alignleft\" src=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethos\/wp-content\/uploads\/sites\/14\/2017\/12\/2-300x231.png?resize=300%2C231&#038;ssl=1\" alt=\"\" width=\"300\" height=\"231\" srcset=\"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?resize=300%2C231&amp;ssl=1 300w, https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?resize=150%2C116&amp;ssl=1 150w, https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/2.png?w=389&amp;ssl=1 389w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>At times, trying to design 
ALIV\u2019s answers, we felt like Monty Python\u2019s shopkeeper: absurdly, pointlessly trying to give a semblance of life to an inanimate object. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Design practices for artificial agents often play on this concealment of the designer behind the design: \u201cthe new idea is that the intelligibility of artifacts is not just a matter of the availability to the user of the <\/span><i><span style=\"font-weight: 400\">designer\u2019s <\/span><\/i><span style=\"font-weight: 400\">intentions for the artifact, but of the intentions of the <\/span><i><span style=\"font-weight: 400\">artifact<\/span><\/i><span style=\"font-weight: 400\"> itself\u201d (Suchman, 1987). See also Franchi and G\u00fczeldere\u2019s (1995) slightly outdated, but still relevant, account of the advancements in the field of AI: \u201cin chess playing programs [&#8230;] the brute force is veiled behind a form of behavior typically associated with something dear to our hearts: the intricate game of chess, where \u2018minds clash\u2019. [&#8230;] the machine intelligence involved in chess playing owes more to the \u2018eye of the beholder\u2019 than to any actual intellectual capacity\u201d. Today, even though machines have mastered games much more complicated than chess, \u201cArtificial Intelligence\u201d is still a contradiction in terms, a self-aggrandizing tech-industry misnomer. <\/span><\/p>\n<p><span style=\"font-weight: 400\">This form of concealment, or projection, might be the natural derivative of the idea that first sparked the development of AI itself: that if a machine can imitate human behavior convincingly enough, it cannot be distinguished from a human respondent (Turing, 1950). <\/span><\/p>\n<h4><span style=\"font-weight: 400\">The ELIZA effect <\/span><\/h4>\n<p><span style=\"font-weight: 400\">Coming close to passing Turing\u2019s test in the mid-60s was Joseph Weizenbaum\u2019s groundbreaking experiment ELIZA, a program written while Mr. 
Weizenbaum was a professor at MIT and named after Eliza Doolittle, who learned proper English in \u201cPygmalion\u201d and \u201cMy Fair Lady\u201d. The program made it possible for a person typing in plain English at a computer terminal to interact with a machine in a semblance of a normal conversation, and is therefore regarded as the first \u201cchatterbot\u201d, or conversational agent.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Meant as a parody of a Rogerian psychotherapist, and indeed as proof of the superficiality of communication between humans and machines, ELIZA went above and beyond its initial purpose, spurring enthusiastic reactions from both practicing psychiatrists and people involved in the experiment, who quickly \u201cvery deeply&#8230;became emotionally involved with the computer\u201d and \u201cunequivocally anthropomorphized it\u201d (Weizenbaum, 1976). <\/span><span style=\"font-weight: 400\">Some of his students exhibited strong emotional connections to the program; his secretary asked to be left alone when talking to ELIZA.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Weizenbaum was deeply troubled by what he discovered during his experiments with ELIZA. In 1976, he sketched out a humanist critique of computer technology in his book <\/span><i><span style=\"font-weight: 400\">Computer Power and Human Reason: From Judgment to Calculation<\/span><\/i><span style=\"font-weight: 400\">. 
The book did not argue against the possibility of artificial intelligence, but was a passionate criticism of systems that substituted automated decision-making for the human mind, and an invitation to carefully consider \u201cthe proper place of computers in the social order\u201d.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Social scientist Sherry Turkle, the director of MIT\u2019s Initiative on Technology and Self and one of Weizenbaum\u2019s former colleagues, considers ELIZA and its ilk \u201crelational artifacts\u201d: machines that use simple tricks like mirroring speech or holding eye contact to appeal to our emotions and trigger a sense of social engagement. <\/span><\/p>\n<p><span style=\"font-weight: 400\">After studying children\u2019s perceptions of advanced humanoid robots such as Cog and Kismet, she noted: \u201cThe relational artifacts of the past decade, specifically designed to make people feel understood, are more sophisticated interfaces, but they are still parlor tricks. [&#8230;] If our experience with [these robots] is based on a fundamentally deceitful interchange\u2014[their] ability to persuade us that they know of and care about our existence\u2014can it be good for us?\u201d (Turkle, 2006)<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Camouflage<\/span><\/h4>\n<p><span style=\"font-weight: 400\">Weizenbaum had unexpectedly discovered the tendency to unconsciously assume computer behaviors are analogous to human behaviors\u2014a phenomenon now known as the \u201cELIZA effect.\u201d<\/span> <span style=\"font-weight: 400\">In the interaction with conversational agents, this also builds on our view of linguistic actions as inherently human. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Natural language is the medium of communication among members of our species, and our species only. 
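<\/span><\/p>\n<p><span style=\"font-weight: 400\">The parlor trick of mirroring speech that Turkle describes can be remarkably cheap. The sketch below is a deliberate caricature of ELIZA\u2019s central move &#8211; reflect the speaker\u2019s pronouns and wrap the result in a Rogerian prompt &#8211; and not Weizenbaum\u2019s actual script:<\/span><\/p>

```python
import re

# Caricature of ELIZA's mirroring trick: swap first- and second-person
# words, then echo the input back as a therapist-style question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(sentence):
    """Swap pronouns word by word, dropping trailing punctuation."""
    words = sentence.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(sentence):
    match = re.match(r"i feel (.*)", sentence.lower().rstrip(".!?"))
    if match:  # mirror "I feel X" as a Rogerian prompt
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"You say that {reflect(sentence)}?"

print(eliza_reply("I feel betrayed by my computer"))
```

<p><span style=\"font-weight: 400\">A dictionary lookup and one regular expression are enough to produce a semblance of attentive listening.<\/span><\/p>\n<p><span style=\"font-weight: 400\">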
Anthropomorphisation, already observed in the interaction with infinitely less complex systems such as household appliances (Taylor, 2009), can only be strengthened by the use of intentional, moral vocabulary when interacting with conversational systems. <\/span><\/p>\n<p><span style=\"font-weight: 400\">But like the parrot, even on its better days ELIZA (and ALIV, and the others, to various degrees) could do nothing but repeat the words it had been exposed to, giving a pale semblance of intelligence. To them, words are memory allocations, faint electrical pulses through silicon circuits, whose relational distances can be measured through vector operations. Sign systems converted into other sign systems, so that a conversation can be held, on our terms.<\/span><\/p>\n<p><span style=\"font-weight: 400\">There\u2019s nothing wrong with that, but if the aim of AI research is still to gain insight into the nature of intelligence, it might be beneficial to redefine the terms of the representation. Maybe, for example, we could think of AI as a <\/span><i><span style=\"font-weight: 400\">simulacrum<\/span><\/i><span style=\"font-weight: 400\"> rather than a copy or imitation:<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cThe terms copy and model bind us to the world of representation and objective (re)production. A copy, no matter how many times removed, authentic or fake, is defined by the presence or absence of internal, essential relations of resemblance to a model. The simulacrum, on the other hand, bears only an external and deceptive resemblance to a putative model. The process of its production, its inner dynamism, is entirely different from that of its supposed model; its resemblance to it is merely a surface effect, an illusion.\u201d (Massumi, 1987)<\/span><\/p>\n<p><span style=\"font-weight: 400\">Historically, the ontological divide between the human and the non-human has been secured by grounding human personhood in the use of language to create meaning. 
But already within second-order systems theory, meaning is disarticulated from language and stems rather from the preference of human and nonhuman (even non-biological) systems for reducing complexity, or &#8220;noise&#8221; &#8211; something autopoietic systems must do if they are to survive. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Human beings are just one of many autopoietic systems sharing their environment with a wide range of non-human animals, each &#8220;bringing forth a world&#8221; in a meaningful, even if not human, way. <\/span><\/p>\n<p><span style=\"font-weight: 400\">In this view, traditional humanism is no longer adequate to understand the human\u2019s entangled, complex relations with animals, the environment, and technology.<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Life under a torn paper sky<\/span><\/h4>\n<p><span style=\"font-weight: 400\">\u201c<em>Lucky marionettes, I sighed, over whose wooden heads the false sky has no holes! No anguish or perplexity, no hesitations, obstacles, shadows, pity\u2014nothing! And they can go right on with their play and enjoy it.<\/em>\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">Luigi Pirandello, <\/span><i><span style=\"font-weight: 400\">The Late Mattia Pascal<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400\">In <\/span><i><span style=\"font-weight: 400\">The Late Mattia Pascal<\/span><\/i><span style=\"font-weight: 400\">, the main character is invited to a performance of the Tragedy of Orestes by \u201cautomatic dolls\u201d in a marionette theater. Suppose, the character muses, that the puppet playing Orestes, at the moment of avenging his father&#8217;s death by killing his mother, were confronted with a little hole torn in the paper sky of the scenery. What then? 
<\/span><\/p>\n<p><span style=\"font-weight: 400\">At that point, hypothesizes the character, Orestes would be overwhelmed: <\/span><\/p>\n<p><span style=\"font-weight: 400\">&#8220;Orestes would become Hamlet. There&#8217;s the whole difference between ancient tragedy and modern &#8230; a hole torn in a paper sky.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400\">This is the spot where Pirandello&#8217;s characters are often trapped, conscious that they&#8217;re moving within cartoon scenery, but unable to leap out. Addressing a conundrum which he calls &#8220;the clumsy, inadequate metaphor of ourselves&#8221;, he hints at themes that would later become central to postmodernist thinking, in which humans with confused, fragmented identities descend into a labyrinth where reality and fiction become increasingly difficult to separate. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Haraway has argued that humans have had to come to terms with multiple decenterings, successive wounds inflicted on human narcissism: the Copernican wound, the decentering of the Earth from the center of the universe; the Darwinian wound, the decentering of humanity from the center of organic life; the Freudian wound, the decentering of consciousness; and a fourth, synthetic wound, the decentering of the natural from the artificial, so that the liveliness of technological entities has had to be accommodated (Haraway, 2003).<\/span><\/p>\n<p><span style=\"font-weight: 400\">In this last decentering, the age of our mechanical reproduction, we might find ourselves lost and speechless. 
I suggest that to establish a truthful conversation with these uncanny others, we must first reconsider what we have long accepted as representations of our humanness.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The aim, then, should not be to reject artificial agents &#8211; valuable means to analyze and make sense of the messy real &#8211; but to encourage a comprehensive understanding of intelligence, beyond imitation and towards the integration of viewpoints, however diverse. Again, with Massumi:<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cThe thrust of the process is not to become an equivalent of the &#8220;model&#8221; but to turn against it and its world in order to open a new space for the simulacrum&#8217;s own mad proliferation. The simulacrum affirms its own difference. It is not an implosion, but a differentiation; it is an index not of absolute proximity, but of galactic distances.\u201d (Massumi, 1987)<\/span><\/p>\n<p><span style=\"font-weight: 400\">The ontological divide we have imagined between ourselves and the non-human world is not nearly as impassable as we have been led to believe. This recognition, however, must be framed not in terms of granting to the other what we think ourselves to be, but as a radical reconfiguration of how we even think of ourselves in the first place. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Turkle reminds us that \u201c[&#8230;] objects with no clear place, play important roles. On the lines between categories, they draw attention to how we have drawn the lines\u201d (Turkle, 2005). The challenge of figuring out meaningful ways of interacting with multiple, puzzling, undefined others might spur the search for a better understanding of intelligence, beyond the human. 
<\/span><\/p>\n<p><span style=\"font-weight: 400\">Comprehending a &#8220;new reality&#8221; in which human beings occupy a universe populated by non-human subjects requires a theory which entails &#8220;an increase in the vigilance, responsibility, and humility that accompany living in a world so newly, and differently, inhabited&#8221; (Wolfe, 2010). We must relinquish our sense of bounded identity and fixed categories to understand and communicate with frightening \u2018others\u2019.<\/span><\/p>\n<p><span style=\"font-weight: 400\">&#8212;&#8212;&#8212;<\/span><\/p>\n<p><span style=\"font-weight: 400\">Agre, P. (1997). Computation and human experience. Cambridge University Press.<\/span><\/p>\n<p><span style=\"font-weight: 400\">G\u00fczeldere, G., &amp; Franchi, S. (1995). Mindless mechanisms, mindful constructions. Constructions of the Mind, special issue of the Stanford Humanities Review, 4(2).<\/span><\/p>\n<p><span style=\"font-weight: 400\">Haraway, D. (2010). When species meet: Staying with the trouble. Environment and Planning D: Society and Space, 28(1), 53-55.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Massumi, B. (1987). Realer than real. Copyright, no. 1, 90-97.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Turkle, S. (2005). The second self: Computers and the human spirit. MIT Press.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Turkle, S., Breazeal, C., Dast\u00e9, O., &amp; Scassellati, B. (2006). Encounters with Kismet and Cog: Children respond to relational artifacts. 
Digital media: Transformations in human communication, 120.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Wolfe, C. (2010). What is posthumanism? (Vol. 8). U of Minnesota Press.<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The dead parrot and the hole in the paper sky Messy thoughts on humans, machines, language and trust. A blog post written by Stefania Santagati, a DIM student at the [&hellip;]<\/p>\n","protected":false},"author":78,"featured_media":3480,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","ngg_post_thumbnail":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":true,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1,17,51],"tags":[],"class_list":["post-3476","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-research","category-blog"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2017\/12\/COLOURBOX17412829.jpg?fit=4000%2C2250&ssl=1","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/3476","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/et
hoslab\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/comments?post=3476"}],"version-history":[{"count":7,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/3476\/revisions"}],"predecessor-version":[{"id":3499,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/3476\/revisions\/3499"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/media\/3480"}],"wp:attachment":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/media?parent=3476"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/categories?post=3476"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/tags?post=3476"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}