We are now on the fifth of the nine short stories in Ted Chiang’s collection Exhalation. This one is very short, just a couple of pages, but despite its brevity it explores some of the most fundamental issues facing us as a society today: technology, children, love, and the meaning of connection as all of these elements fuse together. It was not my favorite story so far, but it is certainly interesting, especially in light of the previous story, The Lifecycle of Software Objects (which, in case you missed it, you can read our analysis of here).
Some further quick notes:
- Want to join the conversation? Feel free to email me your thoughts at danny+bookclub@techcrunch.com or join some of the discussions on Reddit or Twitter.
- Follow these informal book club articles here: https://techcrunch.com/book-review/. That page also has a built-in RSS feed for posts exclusively in the Book Review category, which is very low volume.
- Feel free to add your comments in our TechCrunch comments section below this post.
Reading Dacey’s Patent Automatic Nanny
Chiang has constructed a creative framing device here: we observe a peculiar machine — Dacey’s Patent Automatic Nanny — in historical retrospect, as part of an exhibit entitled “Little Defective Adults — Attitudes Toward Children 1700 to 1950.” The entire story is essentially the museum placard next to the mechanical artifact, describing its background and how it was designed to raise an infant without the need for a human nanny.
Much like the last short story we read in the collection, this one puts the question of human connection mediated by technology at its core. Can we raise a child purely through a piece of technology? Chiang seems to take a definitive stand against the notion, showing that the child’s psychosocial development is hindered by his nearly exclusive interaction with a non-human being. The author even performs a bit of legerdemain right from the beginning: the exhibit’s title of “Little Defective Adults” could apply to robots just as much as to Victorian-era children.
But as with the digients in the last story, we later learn that the child at the center of this one actually has fine interaction skills, just with robots instead of humans. When the automatic nanny is removed from service after two years of raising Lionel’s son Edmund, the child’s development stalls. It is rekindled once he has access to robots and other machines again. Per the story:
Within a few weeks, it was apparent that Edmund was not cognitively delayed in the manner previously believed; the staff had merely lacked the appropriate means of communicating with him.
And so we are left with a continuation of the major questions from the last story: should human-robot interactions be considered equal to human-to-human interactions? If a child is more comfortable interacting with an electronic device than with a human, is that just a sign that we privilege and value certain kinds of interaction over others?
It’s a question that is explored much more comprehensively in The Lifecycle of Software Objects, but it remains just as interesting here in our increasingly digital world. We are launching a multi-part series on virtual worlds tomorrow (stay tuned), but ultimately all of these questions boil down to a fundamental one: what is real?
Outside of that theme (which veers into philosophy and isn’t explored deeply in the couple of pages of story here), I think there are two other threads worth pulling on. The first has to do with the variability of human experience. The whole experiment begins when Lionel’s own father, Reginald, decides to replace a human nanny with a machine in order to provide a more consistent environment for his child (“It will not expose your child to disreputable influences”). Indeed, he doesn’t just want that consistency for his own child; he wants to clone the automatic nanny for all children.
Yet while Reginald feels that human nannies are defective, it is really the automatic nannies that are impoverished. They lack the spontaneity and complexity of human beings, which prevents the children in their care from learning to handle a wider variety of situations and instead pushes them inward. The women Reginald courts, prospective mothers all, intuitively understand this dynamic: “The inventor [Reginald] framed his proposal as an invitation to partake in a grand scientific undertaking and was baffled that none of the women he courted found this an appealing prospect.”
And yet, human contact is precisely what drives the continued pursuit of these robots in the first place. The nanny’s original inventor, Reginald, uses it on his own son Lionel, who later wants to prove its utility to the world by using it on his own son Edmund. So we see a multi-generational pursuit of this dream, but that pursuit is driven by a very human passion: the desire to defend the work of one’s parents and the legacy they leave behind. Human-to-human connection becomes the key driver in proving that human-to-robot contact is just as effective, debunking the very claim under consideration in the process. It’s a beautiful bit of irony.
The other thread to untangle a bit is the scientific method and how far astray it can lead us. Reginald’s creation and marketing of the device is undermined by the fact that he never performs any real experiments on his own child to evaluate the quality of different nannies. He simply makes assumptions, based on his Victorian values, and pursues them relentlessly before heading back to pure mathematics, a field where he can be at ease with his models of the universe.
At the center of this little pattern is a common lesson: sometimes the things that are least measurable have the greatest influence on our lives. This story, much like the exhibit placard it pretends to be, is a warning about hubris and about failing to listen and to love.
The Truth of Fact, the Truth of Feeling
Some questions to think about as you read the next short story, The Truth of Fact, the Truth of Feeling:
- What is truth? What is honesty?
- How do the two frames — a historical one about the Tiv and the “contemporary” one about the Remem technology — work together to interrogate what truth means?
- How important is it to get the details right about a memory? Does a compelling narrative override the need for accuracy?
- Do different cultures have different approaches to storytelling, narration, and universal truth?
- Does constantly recording photos and videos change our perception of the world? Are they adequate representations of the truth?
- How important is it to forget? Memories are supposed to fade with time; is that fading ultimately good for us, or is it harmful?
- Will we fact-check each other’s behavior more and more in the future? What consequences would such a future bring?