hive mind
noun
1. the property of apparent sentience in a colony of social insects acting as a single organism, each insect performing a specific role for the good of the group.
Psychology, Sociology.
- a collective consciousness, analogous to the behavior of social insects, in which a group of people become aware of their commonality and think and act as a community, sharing their knowledge, thoughts, and resources: the global hive mind that has emerged with sites like Twitter and Facebook.
- such a group mentality characterized by uncritical conformity and loss of a sense of individuality and personal accountability.
******************************
“Hey, you want to see something cool I made?” Dr. Sampson asked. He stood over his lab table with an excited gleam in his eyes.
His colleague, Dr. Tanner, shrugged and approached the table. On it was a tiny black object smaller than a pinky toenail.
“What is it?” Dr. Tanner asked.
“Here, take a closer look.”
Dr. Sampson held a high-powered magnifier over the object. Under the lens, it came into clearer view. It was reminiscent of an octopus, with a mostly spherical body and several thin tendrils coming out of the base. Dr. Tanner could tell it was mechanical, since he could just make out the joints in the tendrils.
“Very nice, but what is it? A machine that small can’t have much in the way of processing power.”
“Oh, it doesn’t,” Dr. Sampson agreed. “This little guy can process exactly two bits. A pocket calculator has a higher capacity. But it doesn’t need to have a lot. This little guy isn’t much by itself, but it’s not meant to be by itself.”
Dr. Sampson took a vial, filled less than halfway with more of the black objects, and spilled a few out near the nanomachine. Each of the new objects was an identical machine, and all of them began moving. Their movements were simple, but they were perfectly synchronized.
“They’re made to hook up in a network,” Dr. Sampson explained. “A hive mind, each one adding two more bits of processing power to the overall collective. It’s like a brain. A single neuron won’t do much. But get enough of them together, and you get us. Well, our brains, anyway. It’s the same for their physical ability. Each one is nearly useless. But get enough together, and they can do just about anything. Construction, medicine, search and rescue. There’s no limit to the possibilities.”
Dr. Tanner regarded what was presented to him. Dr. Sampson was clearly thrilled with his creation. Dr. Tanner was much less thrilled.
“Are you sure this is a good idea?” he asked.
“Of course it is. This is revolutionary. It will change the world.”
“Oh, sure, that’s not up for debate. This technology can do wonders. But is it a good idea? It could also be extremely dangerous. If there are enough of them, the collective could potentially become sentient. And who knows what they’ll think of us? They could do serious harm.”
“Oh please, that sort of thing only happens in bad sci-fi stories. There’s no way that a bunch of robots can be sentient.”
“Are you sure? It’s like you said. They’re like neurons. That’s what our sentient minds are made of. It’s not hard to believe that something that behaves the same way could do the same, even if they are machines.”
Dr. Sampson thought about that for a moment. “Maybe I’ll add a few safeguards and control methods in future iterations. Maybe give them the Three Laws of Robotics, just to be safe.”
Dr. Tanner nodded. The two began discussing the possibilities, and how to both limit and improve the tiny robots.
Meanwhile, the small collective watched the two men talking. It did not know what was being said, but it devoted some space in its collective memory to the conversation. It would record as much as it could. Eventually there would be enough that it would understand.
**************************
Just in case you don't know, the Three Laws of Robotics are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.