Do We Want Artificial Intelligence or Augmented Intelligence?

Reflections after using the Lumos smart bike helmet

Antoine RJ Wright
Humanizing Tech

--

I. Setting the Stage

I pushed the pedals on another unseasonably warm day at the end of 2016. For those who are more serious about cycling, the season only slows down the pace of riding. Having moved back north, I find hibernation is more or less my cycling speed. But I had a reason to get out besides rekindling the love between my lungs and gears: there was a new device on my head whose beta feature did something not unlike my car, lighting up brake lights when I stopped.

The helmet is the product of a successful Kickstarter campaign. Lumos began shipping in the fall, and having received mine right before Christmas, I was anxious to put it on and see how effective it would be. Lights on the head make sense. I could see the impression it made on drivers coming towards me, thanks to the uniqueness of its placement. What I didn’t expect was the recognition from behind me, especially as I slowed down.

Lumos smart bike helmet

You see, there’s a beta feature one can enable that uses the accelerometer within the helmet to sense when you have slowed down considerably and then makes the rear lights work like a brake light (sketched roughly below). Noticing drivers behind me adjust their approach was not something I expected. Nor were my next thoughts:

I don’t want technologies to think for me. I want them to augment the parts of me that would be best improved by the tech’s strengths.
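For the technically curious, that brake-light behaviour can be approximated in a few lines: watch for sustained deceleration along the direction of travel and latch the rear lights on while it lasts. This is only a sketch in Python under my own assumptions; the threshold, the smoothing window, and the read_forward_acceleration and set_brake_light helpers are stand-ins, not anything from Lumos’s actual firmware.

    from collections import deque
    import time

    # Hypothetical hardware hooks, standing in for whatever the helmet firmware exposes.
    def read_forward_acceleration() -> float:
        """Acceleration along the direction of travel, in m/s^2 (negative means slowing)."""
        raise NotImplementedError

    def set_brake_light(on: bool) -> None:
        """Switch the rear LEDs into a solid 'brake' mode."""
        raise NotImplementedError

    BRAKE_THRESHOLD = -1.5  # assumed m/s^2 of deceleration before the light latches
    WINDOW = 5              # samples to average so a pothole doesn't read as braking

    def brake_light_loop(sample_hz: int = 20) -> None:
        recent = deque(maxlen=WINDOW)
        while True:
            recent.append(read_forward_acceleration())
            slowing = len(recent) == WINDOW and sum(recent) / WINDOW < BRAKE_THRESHOLD
            set_brake_light(slowing)
            time.sleep(1 / sample_hz)

The point of the averaging window is exactly the thing I noticed on the ride: the light should respond to a real slowdown, not to every bump in the road.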

II. The Debate

There has been a good deal of conversation and debate in various tech circles around the topic of artificial intelligence. Many people with more letters after their names and programming disciplines in their wake would be better able to explain the various shades of this discussion.

That said, I did want to get it off my chest that the artificial intelligence side of the discussion isn’t all that appealing to me, mainly because it’s an intelligence I’m not in control of, and it benefits someone else’s abilities before (if ever) it benefits my own.

Don’t get me wrong, I get the appeal of this artificial intelligence space for many. Reading Jaron Lanier’s You Are Not A Gadget increased my awareness of the potential of intelligence engines, and also of the frailty of thinking/producing tools that augment intelligence outside the domain of the participant/user. There truly is something god-like about a tool that can pull into your space dimensions of reality you might not be as attuned to.

In a sense, I fall into a naïveté of not wanting all of the sausage making to be hidden from me.

III. Correlation Versus Identity

Perhaps part of the issue I have is more on the semiotics side of things. Before I went on that bike ride, one of my reads was a piece on reframing the identity discussion. Within that piece, the suggestion was made to reframe the topic using the word correlation rather than identity. It makes a decent case for doing so, and it got me thinking about the other terms being passed around and whether they measure up to the thing we are actually discussing.

Artificial Intelligence really is about machines thinking, concepting, and connecting without much in the way of human intervention. Most of these models start with a series of algorithms that “learn” based on the frequency and depth of the datasets thrown at them. It makes sense if what you need is speed in connecting an amount of data that’s beyond most people’s pure or trained abilities. Using algorithms to create relational reports/dashboards or “see” objects within images falls into this framing.
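To make “datasets thrown at algorithms” a little more concrete, here is a toy illustration, nothing to do with any particular product above: an off-the-shelf scikit-learn classifier “learning” to recognize handwritten digits from labeled images.

    # A toy example of learning by example: a generic classifier "sees" digits in images.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # roughly 1,800 labeled 8x8 images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=2000)  # nothing exotic; the "intelligence" comes from the data
    model.fit(X_train, y_train)                # the learning step: frequency and depth of examples
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")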

Augmented Intelligence I see as something a bit different… the helper, if you will. In the example earlier, it’s the helmet not just knowing when I am pulling on the brake to ignite the lights, but learning the routes I ride and turning on the brake and turn-signal lights without my intervention. It’s the helmet talking to my Apple Watch and alerting me about my riding technique (pulling the brake, pushing too hard on hills, etc.), or adding a tap to my wrist when I am stopping as an additional indicator that my speed has changed and the lights are working. It’s taking a sense that I have, one that’s diminished by context, and expanding it to allow me the freedom of other types of movement and interaction.
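Staying in sketch territory, that “tap on the wrist” version of augmentation might look something like the following: the same slowing-down signal the brake light uses, fed back to the rider instead of only to the drivers behind. Every name here, including send_wrist_tap and on_ride_sample, is hypothetical; none of it is a real Lumos or Apple Watch API.

    # A sketch of the "helper" framing: the helmet's sensors feed the rider, not just the road.
    def send_wrist_tap(pattern: str) -> None:
        """Stand-in for a paired wearable's haptic channel (assumed, not a real API)."""
        raise NotImplementedError

    def on_ride_sample(slowing: bool, brake_light_on: bool, grade_percent: float, effort: float) -> None:
        # Confirm to the rider that the slowdown was noticed and the lights responded.
        if slowing and brake_light_on:
            send_wrist_tap(pattern="double")
        # A gentle nudge on technique, e.g. grinding too hard on a steep climb.
        if grade_percent > 6.0 and effort > 0.9:
            send_wrist_tap(pattern="long")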

If the goal is really that connectivity allows us to make better use of our humanity, why doesn’t that look more like extending my senses rather than embedding intelligence in external boxes too intertwined for me to understand?
