Myth, AI and Risk

A point that is touched upon in the Remain Relevant course is the element of myth as it pertains to and shapes our beliefs (i.e. in this particular context, what stories do we believe and/or tell ourselves regarding AI, robots, etc. and how do those stories serve or harm us).  The link below takes us to a piece on Medium that dips into the pool of myth in this space and also hearkens back to an earlier blog post on hyperbole:

Artificial Intelligence Isn’t as Autonomous Nor Intelligent as You Might Think

Quote of Note:

“Sadly, at the end of the day, the current prominent AI debate is a spectacle that perpetuates confusing myths. That’s unfortunate because we could be paying attention to the far more pressing, nuanced issues around AI instead.”

Lest we run away with ourselves … I love looking at this kind of well-researched article intended to get us to stop, take a breath, and cease looking over our shoulder for killer drones in our wake.  It is wise and good that we remain as objective as possible and resist the visceral tractor-beam pull of hyperbole.

Still.  It is also wise and good to evaluate probable risk and to consider the function of myth, story and fiction.  Myth, as I mentioned in the Development Days keynote, was the first “science”.  Science is how humans make sense of the world and explain why certain things exist.  Prior to scientific experimentation & observation, myth served that function.

Myth was (and is) a method of conveying human and natural truths by way of weaving and sharing compelling narratives that have endured the test of time.  In this way, myth serves us if we are wise enough to heed the lessons embedded within.  But … we’re human.  We rarely learn from our own mistakes, let alone the mistakes of others … this is triply true of myth and fiction.

As this article serves to demonstrate, the myths surrounding AI are often dismissed and shunted aside as syrupy kid-stuff.  Not so.  This stance ignores the fact that science fiction is the stage upon which many innovative ideas (including AI & robotics) are trotted out, tried and tested.  Fiction, perhaps … but for how long?

What we have now is Narrow-AI … AI that performs a given function very, very well … but only that function.  An automated car cannot also do your taxes (yet).

Per the article, the most common depiction of AI in science fiction is AGI (Artificial General Intelligence) – AI that is as smart as a human and also capable of extending learning/actions across activity “silos” (e.g. the AI could whoop you at chess and then suggest a stock pick or recipe for dinner).  This is for very good reason.  If you want to bore an audience, tell them a story featuring narrow-AI … AGI is much more compelling.  Watching R2-D2 in action is far cooler than a story about an automated car.  (Don’t get me started on C-3PO.)

The thing is, AI doesn’t have to achieve AGI to adversely impact your livelihood.  Traditionally, another human would have to be hired to assume your responsibilities before your job was disrupted.  No more.  AI is capable of performing an increasing number of tasks, which affects myriad jobs.  This trend will only grow as AI is fed increasing amounts of data.

The good news is, there is time to prepare for such an eventuality and adopt new skills that will help us adapt, evolve & pivot to meet the challenges of an increasingly automated and complex world.

Do you want to Remain Relevant in the Age of Automation?  If so, please have a look at the FastFulcrum courses that provide the substrate skills needed to do so:

Related Articles