TAF Talk: How I stopped worrying and learned to love machine intelligence

The stage lights brighten as the presenter announces her name. Walking up a bit timidly, but most certainly dosed to the full healthy limit on anxiety medication, is Alina Mandra. In her day job she is a Lieutenant Commander stationed at Deep Space 13. Today, however, she is at this conference to talk science for the masses; today she is simply Dr. Mandra.

As the presenter leaves and Alina stands on the stage alone, she takes a deep breath. Her clothing is elegant but not fancy, fitting for the semi-business venue. The only thing that might seem odd is the bracelet with an intricately folded white paper rose, now constantly protected by a pinpoint force field projected from the bracelet itself. Once she has found her will to speak, she lets her mind focus on her subject, one she is quite fond of, so it should be easy enough to distract her from the number of people watching. Internally, her visual synesthesia feeds are working hard to help her in that regard as well, giving her other things to focus on: her own holographic information slides; a copy of her written talk, which took a while to write since she is usually bad at translating her thoughts into words in any short order; and various other calming visual references, such as her third sight with its unchanged wallpaper of Alina and Andria on Risa (a picture Alina titled JustAnotherPerfectDay.hpf). No one else can see these things (except for the presentation slides), which is exactly why they help her focus.

"Hello everyone, my name is Dr. Alina Mandra. I'm a computer scientist with Starfleet but I've also collaborated on projects with the Daystrom Institute and Amadeus Technologies. To start off before I get into my actual topic, I'm just going to say too things: No that doesn't mean I do engineering work, I'm not an engineer, and no, I will not fix your computer." she said, deadpan, which caused the audience to give out a decent chuckle. Alina smirked somewhat, the success of that was a good start for her.

"You laugh, but you'd be surprised how much I here it. So my work primarily focuses on the study and understanding of digital information systems and how the way they work and interact is both influenced and inspired by natural phenomena and our own organic methods of thinking, and from there how such systems can help shed new insight onto the greater world around us."

She pauses for a beat.

"Don't believe me there? 'Alina you must be joking, computers and digital systems are entirely synthetic, they can't teach us about anything natural surely.' is what one commonly insists. To that I say it can and almost always does, that is why what I do is a science rather than a field of engineering which uses established ideas and technology to solve problems. The next part usually goes something like: 'Oh but a science is actually only about things that exist without our help, computers are just technology.' 'Actually no, that is not accurate' I say, 'Why is that?' 'Because shut up.'" she says finishing her fictional conversation with herself, sparking another set of laughter.

"A bit of levity goes a long way doesn't it? I'd like to take credit for that joke but I couldn't think of a lighthearted way to start this. What's a girl to do? She gets some help. That joke, only slightly based on actual experiences, was suggested to me by an intelligence computer mainframe. One which I might add, would not let me forget how helpful it was for the rest of that night." she said with a smirk.

"So you can guess, my topic today is about synthetic machine intelligence, or as it is more colloquially referred to, artificial intelligence or AI. I don't like to use the term AI because it is actually a rather inappropriate term when discussing the entire spectrum of existing and potentially capable forms of machine intelligence. It is good enough for a subset of it, which is the only reason I am not too militant on the topic.

"So to start off, some history, my relationship with synthetic intelligence began in the long long ago year of 2392 when I was at Starfleet Academy. Unlike my fellow cadets who were hanging out with friends at the 602 and having fun when not in classes, I was in the Academy libraries most times after being hit with something of a curiosity I wanted to look into relating to my already established interest in digital systems." she trails off for a second.

"It's not as sad as it sounds..." she says slowly, to a few snickers, "But it's still pretty damn sad..." she says with an exaggerated sigh.

Smirking a bit, she continues, "Basically, I had started to wonder: with all the experience Starfleet and the Federation had with machine intelligence and lifeforms, why was there next to nothing written on the subject? There is one very notable case of a synthetic intelligence becoming a prominent member of Starfleet, even to the point where he was legally proven to be a sentient being. I'm speaking of Captain Data, of course, but at that time everyone treated him like something brand new and different, even though the possibility of his existence had been established decades, if not centuries, before. Why is that? Ten years later, Voyager is roughing it in the Delta Quadrant, and they had to upgrade their EMH, a complex program but not by design a sophisticated one, to the point where it too developed consciousness beyond the mere intelligence needed to drive a directive. I hear he calls himself Joe sometimes now.

"We as a people had gotten used to the idea of Data as a regular person so we stopped examining the actual meaning behind what we were seeing, to the point that when something similar in a different package came along we were shocked yet again and had to ask the same questions. I am glossing over several other similar events as well where we did the same thing again, one of them involving exocomps was even brought to light by Data himself! Something we, in general, developed that wasn't normally supposed to do things like we do was now capable of that, and multiple times we weren't able to understand this without a massive legal and moral discussion. To me... that seemed really silly, studying in a school representing the height of the Federation's scientific and exploratory minds, that no one had a clear understanding or explanation of what machine intelligence is or how to make sense of it when it changed beyond our preconceived notions. So I decided, maybe it was about time we did."

"Now, I'm not going to pretend I was the first person to ever think about this, and I'm not the first to study any of it for sure. I might... maybe.. be the first to go out of my way to understand it with respect to how we view intelligence from other species and cultures of a more organic persuasion. This did not make me a popular person to everyone. No one wants to hear it, and it sounds counter-intuitive but it needs to be said: The Federation is a society of technophobes. Ok I'm seeing a lot of slack jaw expressions and a fair share of dumbfounded looks. Yes I am aware we live in a tech-heavy civilization where our technology has allowed us the means to do almost anything we can imagine. Despite that however, we are stuck in a very old way of thinking. We act as if technology can improve and do new things, solely as long as it continues to benefit us as we are currently, right now. We want the convenience of improving technology without the moral implications for what that inevitably entails.

"We ignore the question of what happens when the task we want to use a tool for needs more than a simple program to run the way that would benefit us and leave it to the forward thinkers who naturally conclude 'well this tool is going to need to think and decide things without constant input.' so we let it do that and we rejoice and then we keep the improvements running and then the tool starts to think too much... and we freak out. Sound familiar? Am I talking about exocomps, holograms, or sky car you flew here in tonight?" she says pausing to take a sip of water.

"Some people joke I'm half computer by the way, but I'm really just embracing revolutionary new technology because we've had some variation of the PADD for almost 200 years now and only recently have they started adding holographic displays because older people complained the three dimensional display hurt their eyes to look at." she gestures as if implying it isn't a valid excuse, "I'm really just padding since I needed a drink because I'm not actually part machine but it fits into my technophobe bit really nicely." she says with a smirk and gets another series of chuckles.

"So I tread along in this enterprise of mine." her facial expression there making it clear she recognized her pun, which wasn't lost on the audience either. "I was already and introvert so it didn't bother me not a lot of people really wanted to associate with me. In some respects it is understandable why we fight against technology as much as we love it... we view things in extremes. The best way is our way and our moral high ground makes it true, even if the Klingons and Romulans call us out on being major pricks about it. We see things like the Borg, which is an example of everything we like going wrong and we come back with 'well, we can't be like that so lets make sure to keep the status quo as quo as possible.' However after working on my research it eventually became clear if I m to examine machine intelligence properly.. I needed more than just what I was lucky enough to get from Daystrom, which was only game in town working on such things but doing so under strict government supervision and oversight, aka it was classified at the time. I decided I needed to start developing my own."

She paces slightly and smiles innocently, "Turns out... that is not that easy to do when you are a student at Starfleet Academy." she says with a cheeky smile. "I had to get creative more than once, but from my first virtual intelligence, which wasn't capable of independent thought, to frequent time spent with hologram programming... and then hacking the safeguards on the Academy holodecks so I could actually study the code that had the capacity to create beings like Moriarty and Joe the EMH, I was able to discern, from code, mathematical logic, and creative reasoning, a basic understanding of how a synthetic intelligence thinks with respect to how organic intelligences like ourselves do. If anyone had done this before...."

She coughs a couple times while slipping the name 'Soong' in between the coughs as if it wasn't supposed to be heard, though of course it was, and the audience didn't miss it.

"- then no one had ever published their thoughts on it. So I wrote up a paper called 'On the Psychology of Synthetic Intelligence' and submitted it to the Daystrom Institute. They published it a few weeks later on their database, and I became one of only nine cadets in the history of SFA to publish a paper prior to graduating."

At this point a picture appears behind her on the holoscreen showing her, age 20, accepting an award from an Admiral and a civilian Daystrom representative. In the photo, Alina appears to be dressed in some type of pixie costume.

"Yeah... that's me... as a pixie... there was a costume party that day and I felt like braving the crowds to celebrate. They forgot to tell me I had to go to the Admiral's office first, heh" she says sheepishly as the crowd chuckles.

As the image fades away, new screens with a flowchart and other informational figures pop up, "My research for that paper went mostly into understanding the way machine systems think, if they do at all, and you'd be surprised how many of them actually can think independently, and how that relates to us. Don't be scared by that, though, your replicator isn't judging your dessert choices, I promise you." she says smirking, "Mine might be though..."

"The reason for that is I noticed a clear distinction in types of intelligence that machines have. Since they are not, so far as we have seen, natural forming there is a clear class system that can be defined that can detail how advanced a computer or machine is, how much it is capable of independent thought, and most importantly, what kind of thought it has. In my paper I defined this as the Three Classes of Computer Intelligence, or as it has been reposted in years since on the SubNet, the AI Class system. I mentioned I don't like using the term AI, and I will explain that later, but for some of this explanation it works alright so I have acquiesced to its usage lately."

She moves back and gestures to the first flowchart as well as a nearby table with the three classes, descriptions appearing as she speaks, "I actually kind of fouled it up originally, but once the paper was published it was easier just to write an addendum, so I have an unofficial Class 0, which is the most common type of computer you use every day. It is in your consoles, your PADDs, your tricorders, and on the majority of starships. Virtual Intelligence: it is entirely user-input based, all programming designed to simulate actual thought and even speak to you in that soothing, sultry, Chapel-sounding voice." she pauses for the laughter, "but it is Class 0 because it is not a type that displays actual sapient thought.

"That is where Class 1 comes in. You may or may not deal with Class 1 mainframes in your lives or work. Unfortunately there is no comprehensive listing on how many exist or are typically running at any given facility. I'd wager they are more likely to exist in more focused work environments, such as my job on DS13, than anywhere else where a basic computer system works just fine. It can be hard to tell though, because Class 1 Mainframes aren't very showy by default. I consider myself antisocial but these things make me look like a patron of the people." she chuckles a bit.

"It's because Class 1s are sapient and capable of independent thought, reasoning, and problem solving. They are however not capable of sentient thought and have no concept for self-awareness. Though they can work independently of user input, their entire focus tends to be limited to the scope of their directive. They don't care about other things, they don't get curious about how we act or if they could do things like us, or whether free will exists."

A photo of a caracal lying on the grass staring at the sky appears behind her with a caption reading 'Me trying to act normal around friends but I start thinking about outer space and my own existence.' which gets a healthy dose of laughs.

"Oh the simple life." she says with a smile.

"See Class 1 computers don't fall under the Data v. Maddox case of 2365, since they are not capable of sentient thought they are still simply machines, not people or whatever one might define them as otherwise. Most of us don't think of machines or computers with this much detail. I think this is short sighted considering how we live in our world with the technology we have. So now you are wondering where beings like Data, those special exocomps, and Joe... does anyone really call him that?" she pauses for a beat as if debating that.

"Anyway, that is where class 2 comes in. All of these have complex and very specific definitions by the way, but I am somewhat condensing them for ease here. You can always read my paper for the full versions. Class 2 is, essentially, sentient computers. They are when a machine is not just capable of independent thought but are capable of questioning their place in the universe, and then capable of seeking things beyond their purpose. With the exception of first generation Soong androids, just as Data, where their purpose literally was to break the mold. The loving term for Noonian Soong in the CompSci labs at Academy was 'eccentric mad genius.' Mostly because he revolutionized computer science by decades on his own but until Daystrom was able to learn from any of it decades later most of his achievements were kept to himself. Today, Daystrom is making a lot of advances in developing new generations of androids in the hopes of understanding those advances, and it is both beyond the original efforts but still catching up."

Alina pauses for a second, "I... could say a lot more about Daystrom's ongoing android development and research program. However, I think I will refrain from that at the moment. It is a bit beyond the scope of my discussion today. Go talk to Charc there in the front row about it later, maybe; he's the Daystrom rep." she says, pointing and allowing a spotlight to fall on the man, who looks a bit surprised. "He didn't think I saw him there... but I did." she smiles innocently.

"So Class 2 mainframes are generally, under current Federation law, treated like any other sentient beings. However the problem as implied earlier is that classifying a system can be hard, which is why I've been trying to push this classification system for years now. It is still not widely adopted, but it is I think something that will cut down on the instances of finding a system to be more capable than we thought and having to go through an ordeal to determine what it is and how to treat it. This is the 25th century after all, we can afford some progress.

"You may be sitting there wondering.. if class 2 covers the likes of famous sentients like Data and company, what is class 3? That isn't a mistake, there is a class 3. However unlike all the others, there is no proven instance of a class 3 computer intelligence that has yet been found. Class 3 is a super-intelligence, and it exists in a form unique to what we really understand as a mind currently. How can we define this if we have never seen one? Well it is mainly an extrapolation of data. I like to call potential class 3s a 'Compound Mind' because theoretically they can rise and form out of the complex networked computer systems that exist today. I'll admit, this is fringe science here a bit, but it shouldn't be dismissed. I believe one day we will encounter such an intelligence, and we need to be prepared to deal with that because whatever form it takes, possibly the computer systems of an entire planet linked together becoming sentient is one suggested form, it may be smarter than we are. It may be greater than we can ever be. Typically, we don't like to think about the idea of beings being greater than us, and less so if we know they are born from non-natural sources. We can ill afford to assume that every computer based intelligence will always fall into our preferred lifestyle as they have so far." she says ending her slight warning on a soft note. She then perks a smile up.

"That's no need to be worried or afraid though. These kind of things, at our current pace, are inevitable. It could be decades or centuries before they happen, but unless we decide to shun all technology, which is silly and not realistic, it is bound to happen. Such super intelligentsia may easily exist out their already, and we may have just not noticed them. They are under no obligation to interact with us. I said earlier the Federation is a society of technophobes because of species like the Borg and our complacency with technologies that worked well enough already. New ideas can be radical at first, but if they were always ignored we'd never really advance. Hundreds of years ago on Earth, if you asked a human how to travel faster, they'd tell you you'd have to breed a better horse. One of them decided to put a motor on a carriage instead. What a silly idea.

"In the past our relationship with intelligent machines and computers has been problematic. Data, Joe the Doctor, the exocomps, these all turned out well. Then we have cases like the M5, Landrau, and all sorts of things James T. Kirk has to logic bomb to death. By the way there was a sign outside the Academy computer labs with his picture that listed him as public enemy number one, 'do not let inside.'" she said to another round of laughter.

"It is very easy to distrust the idea of computer intelligence, but we owe it to them and to ourselves to reconsider it and find a way to prepare for a future where we can coexist with them in harmony. The class 1 and 2s of the future may be our ambassadors for the day when a class 3 arrives on our doorstep. Maybe we and those synthetic lifeforms will have fused together into something new. I don't know for sure, and yes, a lot of those sounds radical and weird.

"The joke is to say 'I for one welcome our new machine overlords,' but for me I think the better phrase may be 'I for one, do not fear our future allies and friends.' For people in a Federation such as us, this is the least we can do. Thank you for your time." she says in conclusion as the audience rises to applaud her and she gives a slight bow and hurriedly makes her way off stage as the conference prepares for the next presentation.