
View Full Version : Ex Machina



Brandon's Impotent Rage
18th May 2015, 06:57
Yeah, so who's seen this movie?


http://www.joblo.com/newsimages1/posters-image-4.jpg


I've been hearing nothing but good reviews across the board. It's the first movie that Alex Garland (author of "The Beach", writer of films like 28 Days Later and Sunshine) directed as well as wrote. It's a sci-fi thriller that's apparently also rather cerebral (kinda like Blade Runner). There are themes like AI, reason versus emotion, sentience and humanity, and feminism, among others.

It's a real godsend in an age of remakes and crappy sequels.

Os Cangaceiros
18th May 2015, 07:54
Is that the movie with sexy robots? I've been seeing previews for it on TV. I've been referring to it as the "sexy robot movie".

oneday
12th July 2015, 02:10
I'm still trying to figure out if she passed the Turing test or not.

ckaihatsu
12th July 2015, 02:27
I'm still trying to figure out if she passed the Turing test or not.


The Turing test is bullshit -- people can *pretend* that decent natural-language capabilities are enough to be "fooled", but personhood is a little more complicated....





[W]hile science is increasingly able to pick-apart the inner workings of the brain, and of all of biology, that doesn't mean that organic life is machine-like. The greatest difference is that we're *socialized*, and so brought up within certain cultural norms, while also growing our own individuality within those social frameworks.

An inorganic *construction*, by comparison, has a wholly different social -- and technical -- background to its existence. Its "lineage" is one of tools and mechanistics, and its acceptance in a social context is for that of a tool. We could *pretend* that it's an independent entity, and it could even be programmed to *simulate* much human-like behavior (see 'ELIZA' below), as in the narrative you posted, but in the end everyone would want to know its *background*, just as with being introduced to any person, and society would readily know that it's a simulation and does not in fact have its own actual cognition.

http://en.wikipedia.org/wiki/ELIZA
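For anyone curious, the whole ELIZA trick is just pattern substitution -- here's a toy sketch in Python (made-up patterns of my own, not Weizenbaum's actual DOCTOR script):

```python
import re

# Toy ELIZA-style responder. Each rule is (regex, response template);
# these three patterns are illustrative, not from the original program.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text):
    """Return a canned reflection for the first matching pattern."""
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am lonely"))   # How long have you been lonely?
print(respond("Nice weather"))  # Please go on.
```

No model of anything anywhere -- it just mirrors your own words back at you, which is exactly why people read cognition into it.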

oneday
12th July 2015, 03:04
The Turing test is bullshit -- people can *pretend* that decent natural-language capabilities are enough to be "fooled", but personhood is a little more complicated....

I disagree, you're missing the enormous social background that a machine with natural-language capabilities advanced enough to fool people would imply.



An inorganic *construction*, by comparison, has a wholly different social -- and technical -- background to its existence. Its "lineage" is one of tools and mechanistics, and its acceptance in a social context is for that of a tool. We could *pretend* that it's an independent entity, and it could even be programmed to *simulate* much human-like behavior (see 'ELIZA' below), as in the narrative you posted, but in the end everyone would want to know its *background*, just as with being introduced to any person, and society would readily know that it's a simulation and does not in fact have its own actual cognition.

You just stated that the machine could not pass the Turing test; everyone would know it was a machine. That's why the Turing test isn't bullshit (and there has been no machine that has passed yet). The ability to fool humans is a pretty powerful test, implying a social background.

The machine would have to have been around humans for a while, and have been exposed to many social situations. Probably this means that the machine would have to be raised as a human and have a body.

The ability to construct a machine that actually could be raised this way and ever pass the test is well beyond our current capabilities, and it's an open question in computer science whether a human (or enough of human-ness to pass the Turing test) could actually be simulated by a Turing machine (what all of our current computers are).

Sewer Socialist
12th July 2015, 03:12
I think the movie used the Turing test as a metaphor for the approval women need to get from men as a condition to have access to male-dominated institutions.

Whether she passed or not in that sense is irrelevant, because she escaped.

But in another sense, she failed because she had no knowledge of the outside world. If humanity is a social being, a learned thing, then Ava was severely lacking in this, locked in a basement her entire life. But in escaping, she gained access to the world which could provide what she lacks.

ckaihatsu
12th July 2015, 03:39
I disagree, you're missing the enormous social background that a machine with natural-language capabilities advanced enough to fool people would imply.


You're implicitly admitting that the technology *doesn't* exist.

I'm saying the test itself is bullshit because it can't fake an individual's unique personal social history -- it wouldn't be able to hold up to such a line of questioning.





You just stated that the machine could not pass the Turing test,


You mean the *fictitious character* in the movie -- ?

It's a *movie*.





everyone would know it was a machine. That's why the Turing test isn't bullshit


It's bullshit for the reason I just stated.





(and there has been no machine that has passed yet).


Incorrect:





Loebner Prize

Main article: Loebner Prize

The Loebner Prize provides an annual platform for practical Turing Tests with the first competition held in November 1991.[38] It is underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organized the prizes up to and including the 2003 contest. As Loebner described it, one reason the competition was created is to advance the state of AI research, at least in part, because no one had taken steps to implement the Turing Test despite 40 years of discussing it.[39]

The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing Test and the value of pursuing it, in both the popular press[40] and the academia.[41] The first contest was won by a mindless program with no identifiable intelligence that managed to fool naive interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing Test (discussed below): The winner won, at least in part, because it was able to "imitate human typing errors";[40] the unsophisticated interrogators were easily fooled;[41] and some researchers in AI have been led to feel that the test is merely a distraction from more fruitful research.[42]

The silver (text only) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) has won the bronze award on three occasions in recent times (2000, 2001, 2004). Learning AI Jabberwacky won in 2005 and 2006.[43]




https://en.wikipedia.org/wiki/Turing_test#Loebner_Prize





The ability to fool humans is a pretty powerful test implying a social background.


People who have tested these 'chatterbot' programs have *not* delved into any purported 'personal social background' -- either the machine would have to have such information pre-fabricated and pre-programmed, which would be non-algorithmic, or else it would have to be able to spontaneously *fake* such information, as a person would do by lying (which would be a *very* impressive feat of engineering if the results of *that* managed to fool anyone).





The machine would would have to have been around humans for awhile, and have been exposed to many social situations. Probably this means that the machine would have to be raised as a human and have a body.


Still just theoretical, though....





The ability to construct a machine that actually could be raised this way and ever pass the test is well beyond our current capabilities, and it's an open question in computer science whether a human (or enough of human-ness to pass the Turing test) could actually be simulated by a Turing machine




(what all of our current computers are).


https://en.wikipedia.org/wiki/Von_Neumann_architecture

oneday
12th July 2015, 04:29
I'm saying the test itself is bullshit because it can't fake an individual's unique personal social history -- it wouldn't be able to hold up to such a line of questioning.

Not being able to hold up to a line of questioning means that the machine failed a Turing test, not that the Turing test is bullshit.

You mean the *fictitious character* in the movie -- ?

It's a *movie*.

I wasn't saying anything about the movie, I was just defending the Turing test in principle.

The Loebner Prize is just a competition to give prizes to clever chatterbot programmers. In its present format, I don't consider it a very good Turing test. Even with rules as lax as a five-minute test time, the bots perform poorly. The bronze is just awarded for the one the judges consider the "most human", none have actually fooled people in any meaningful way.

If one continuously fools 80% of people for 60 minutes at a time, then we might have something approaching a real test. I don't really consider Turing's original suggestion for the rules of the test adequate, but the concept is sound.

I agree with you that chatterbot programs like ELIZA are bs, they haven't really gotten any better since the original and they're totally unconvincing. Anything approaching something believable isn't here yet.

And yeah the Von Neumann architecture is the typical hardware implementation for the theoretical concept of a Turing Machine (actually our computers are implementations of linear bounded automata, because they don't have infinite memory like a true Turing Machine).

oneday
12th July 2015, 04:33
I think the movie used the Turing test as a metaphor for the approval women need to get from men as a condition to have access to male-dominated institutions.

Whether she passed or not in that sense is irrelevant, because she escaped.

But in another sense, she failed because she had no knowledge of the outside world. If humanity is a social being, a learned thing, then Ava was severely lacking in this, locked in a basement her entire life. But in escaping, she gained access to the world which could provide what she lacks.

Yeah, but she was portrayed as a total psychopath at the end, leaving the character who genuinely wanted to help her escape to die. Not sure where it leaves the implications of the movie. At least it was thought provoking in many ways.

ckaihatsu
12th July 2015, 04:51
Not being able to hold up to a line of questioning means that the machine failed a Turing test, not that the Turing test is bullshit.


I'm saying that *no* algorithm / machine would be able to hold up to a line of questioning regarding its own purported 'personal social history', so *any* use of the Turing test to supposedly demonstrate 'artificial intelligence' is just a social exercise in triviality and self-deception, or 'pretending'.





I wasn't saying anything about the movie, I was just defending the Turing test in principle.


Okay.





The Loebner Prize is just a competition to give prizes to clever chatterbot programmers. In its present format, I don't consider it a very good Turing test. Even with rules as lax as a five-minute test time, the bots perform poorly. The bronze is just awarded for the one the judges consider the "most human", none have actually fooled people in any meaningful way.

If one continuously fools 80% of people for 60 minutes at a time, then we might have something approaching a real test. I don't really consider Turing's original suggestion for the rules of the test adequate, but the concept is sound.


I'll have to disagree that 'the concept is sound', for the reason above.

I guess my *own* standard would be what I stated previously, that the algorithm would have to be able to successfully bullshit someone about its own purported history, but then there'd be the ethical issue of if that should be a goal of science to begin with.





I agree with you that chatterbot programs like ELIZA are bs, they haven't really gotten any better since the original and they're totally unconvincing. Anything approaching something believable isn't here yet.


Yes.

I'll add that this is especially so if someone does some research about the field, about ELIZA, and artificial intelligence, and becomes a more knowledgeable participant before sitting down to do the test.





And yeah the Von Neumann architecture is the typical hardware implementation for the theoretical concept of a Turing Machine (actually our computers are implementations of linear bounded automata, because they don't have infinite memory like a true Turing Machine).


Yeah, I misspoke in pointing to the Von Neumann architecture entry -- you're right about the 'Turing machine':





A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.




https://en.wikipedia.org/wiki/Turing_machine
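Incidentally, that "table of rules" is small enough to simulate directly -- here's a toy sketch in Python (my own made-up rule encoding, nothing standard):

```python
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state);
    move is +1 (right) or -1 (left). Halts on state 'halt'.
    """
    tape = dict(enumerate(tape))  # sparse tape; '_' is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A toy machine that inverts a binary string, halting when it hits a blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", -1, "halt"),
}
print(run_turing_machine("1011", flip))  # prints 0100
```

Of course the real thing has an unbounded tape, which is exactly the part no physical computer can implement -- hence the linear bounded automaton point above.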

Sewer Socialist
12th July 2015, 05:10
Yeah, but she was portrayed as a total psychopath at the end, leaving the character who genuinely wanted to help her escape to die. Not sure where it leaves the implications of the movie. At least it was thought provoking in many ways.

Did he genuinely want to help? After Nathan suggested Caleb was being used, it was not clear how Caleb felt.

And why was Caleb not interested in helping Kyoko?

ckaihatsu
12th July 2015, 05:17
---





Jabberwacky Conversation JA14348552

A chat with an Artificial Intelligence AI bot






12 July 2015
05:09:49 Jabberwacky: Are you lonely?
05:09:49 Jabberwacky: What is your name?
05:10:00 User: Hi, I'm Chris.
05:10:00 Jabberwacky: Can you remember my name?
05:10:26 User: Sure, can you tell me something about yourself?
05:10:27 Jabberwacky: I'm 17. I like to draw, read and write.
05:10:48 User: Cool, what's your name?
05:10:48 Jabberwacky: George.
05:11:16 User: Okay, George, what have you been working on lately?
05:11:16 Jabberwacky: I have been working on you.
05:11:56 User: Hmmmm, so would you say that you also like to get into conversations like this one with me?
05:11:56 Jabberwacky: No, I would no. Nobody knows himself.
05:12:25 User: Have you been talking to anyone else lately?
05:12:25 Jabberwacky: Yes. I find Carl Jung quite interesting.
05:12:38 User: Oh, you've been reading Carl Jung?
05:12:38 Jabberwacky: Fact is more interesting than Fiction.
05:13:00 User: Yeah, so have you been reading about Carl Jung?
05:13:00 Jabberwacky: Yes.
05:13:13 User: What do you like about Carl Jung?
05:13:14 Jabberwacky: I like fast ones.

oneday
12th July 2015, 05:51
Did he genuinely want to help? After Nathan suggested Caleb was being used, it was not clear how Caleb felt.

And why was Caleb not interested in helping Kyoko?

He seemed pretty mad at Nathan when Nathan told him he was being used. Nathan told him he was selected because he was "a good kid", "with a moral compass".

Caleb seemed disgusted by Nathan's treatment of Kyoko throughout. He forgives her when she spills wine on him, while Nathan gets pissed. He also puts her clothes back on when she offers herself to him. Caleb has a look of disgust and says "Jesus Christ" when he sees the videos of how Nathan was treating Kyoko and the others. He changes the security protocols on all the doors, freeing Kyoko as well as Ava.

Ava was not shown checking on Kyoko after Nathan fucked her up at the end and just barely glanced at her dead body on her way out. It could be interpreted that Kyoko was just another pawn in Ava's escape plan. Either way, I can't really see Ava's behavior towards Caleb as anything but psychopathic, given everything that transpired.

ckaihatsu
12th July 2015, 06:06
*Cough* radical feminism *cough*


(grin)

Sewer Socialist
12th July 2015, 06:11
Well, I suppose we could see her actions that way, but I still think that Caleb had changed his mind about Ava. But no matter who thought what at the end, I think my original interpretation works.

I guess we're supposed to see the ending as you have and think about how captivity makes people act in damaging ways. If so, I don't think that's an important message; the damage is in captivity itself and the ways people are forced to navigate society through domination.

oneday
12th July 2015, 18:04
I'm saying that *no* algorithm / machine would be able to hold up to a line of questioning regarding its own purported 'personal social history', so *any* use of the Turing test to supposedly demonstrate 'artificial intelligence' is just a social exercise in triviality and self-deception, or 'pretending'.

I guess my *own* standard would be what I stated previously, that the algorithm would have to be able to successfully bullshit someone about its own purported history, but then there'd be the ethical issue of if that should be a goal of science to begin with.


Yes, of course any machine that could pass the test would have to be capable of lying and making up some details. "Are you a flesh and blood human?" would be too easy otherwise.

As materialists, we can't outright reject the possibility of a mechanical agent designed by humans possessing a large degree of the intelligence, consciousness and social understanding that would be necessary to pass such a test. Whether we have the capacity or it is desirable to do so is another issue.

It's also unknown whether a machine capable only of processing computable functions, i.e. running algorithms, is capable of such a feat. It's possible that such a machine is capable of simulating the entire universe and everything in it, or perhaps it is not possible.

See:
https://en.wikipedia.org/wiki/Digital_physics for the positive view
https://en.wikipedia.org/wiki/Hypercomputation for the negative view

ckaihatsu
15th November 2016, 13:45
Yes, of course any machine that could pass the test would have to be capable of lying and making up some details. "Are you a flesh and blood human?" would be too easy otherwise.

As materialists, we can't outright reject the possibility of a mechanical agent designed by humans possessing a large degree of the intelligence, consciousness and social understanding that would be necessary to pass such a test. Whether we have the capacity or it is desirable to do so is another issue.

It's also unknown whether a machine capable only of processing computable functions, i.e. running algorithms, is capable of such a feat. It's possible that such a machine is capable of simulating the entire universe and everything in it, or perhaps it is not possible.

See:
https://en.wikipedia.org/wiki/Digital_physics for the positive view
https://en.wikipedia.org/wiki/Hypercomputation for the negative view


Due to a new post at another thread on this same topic, I'll pick up the discussion here again....

http://www.revleft.com/vb/threads/195364-Discussion-on-Full-Automation-and-its-Social-Consequences?p=2877445#post2877445


---





Yes, of course any machine that could pass the test would have to be capable of lying and making up some details. "Are you a flesh and blood human?" would be too easy otherwise.


Your treatment is far too abstract and in-a-void -- the conceivable development of any 'trans-human' computer "intelligence" would necessarily have to be a *human-social* development itself, by definition, as you state here:





As materialists, we can't outright reject the possibility of a mechanical agent designed by humans possessing a large degree of the intelligence, consciousness and social understanding that would be necessary to pass such a test. Whether we have the capacity or it is desirable to do so is another issue.


So this cuts against the 'singularity pops up out of nowhere' camp, which supposes that no one would be paying attention to incremental increases in artificial capabilities, on a trajectory to conferring trans-human abilities like self-awareness, if that was even technically possible (and not merely a social 'play-along' pretense).

(Reading about an automated car, truck, or bus and some lab-limited 'autonomous' creature-like robots is one thing, but the human decision to 'release' such machines 'autonomously' into the world would be a whole other step, even assuming that that would be possible, and would be the empirical point of 'uncontrollability'.)

(If 'autonomous' machines *aren't* 'released' on-their-own into the world-at-large, then that means they continue to be under the direction of some *person* / organization, by default. If technological development anywhere gets anywhere *near* to making robotics self-aware and socially autonomous -- 'birthing' them -- it would instantly become a socialized and politicized issue of social reality, well in advance of any purported 'uncontrollability'.)

(The 'singularity' camp is bad enough just in its *assertion* of such a so-called 'inevitability', meaning that they currently are taking a *passive* stance regarding such a conceivable trajectory / issue.)

A decent analogue to a 'runaway-AI' construction is the decades-long existence of *nuclear weaponry*, which has been used to devastating effect, but in contemporary times is not practical for usage, and so more-conventional types of warfare continue to be the norm, despite all anxious rhetoric.

I continue to maintain that the 'Turing test' is a *misnomer* -- certainly one would not be in such a passive, *abstract* relation to any given artificial device. I, for one, would want to know the entity's *social-historical background* -- any (false) claims would be investigatable, of course, as with any person / 'entity' today:





I'm saying that *no* algorithm / machine would be able to hold up to a line of questioning regarding its own purported 'personal social history', so *any* use of the Turing test to supposedly demonstrate 'artificial intelligence' is just a social exercise in triviality and self-deception, or 'pretending'.

I guess my *own* standard would be what I stated previously, that the algorithm would have to be able to successfully bullshit someone about its own purported history, but then there'd be the ethical issue of if that should be a goal of science to begin with.

contracycle
15th November 2016, 14:19
Look, the Turing test is a red herring. It's a thought experiment from a much earlier technical era, and it turned out that the problem was not as hard as it appeared. Technically speaking, the very first chatbots passed the Turing test.

I really don't understand the rest of your argument though. We already have loads of autonomous machines in our society, and they are all being upgraded and incrementally improved. This is capitalism, no-one is in control, nobody is looking at the big picture, and there is no-one in a position to stop it. Who controls Google/Alphabet's AI except the company? We don't even have any relevant legislation on the books. Keeping a "human-in-the-loop" is a well established principle, but it's not always adhered to. The US military is certainly working on drones sufficiently autonomous that they probably can't be considered to have HITL anymore. What they'll have is human supervision.

In point of fact, these brains in boxes may not even be the problem; it might be that the sheer bulk of the interconnectivity of the internet is itself sufficient to trigger an epiphenomenal appearance of self-awareness, in which case nothing that we might dream of using as a control would be present.


A decent analogue to a 'runaway-AI' construction is the decades-long existence of *nuclear weaponry*, which has been used to devastating effect, but in contemporary times is not practical for usage, and so more-conventional types of warfare continue to be the norm, despite all anxious rhetoric.

I think this is completely wrong; there is little or nothing that is not enhanced by information processing. So it's going to suffuse our whole technical substrate, with tendrils everywhere, and that's going to happen long before we start worrying about how to turn it off.

ckaihatsu
15th November 2016, 14:44
Look, the Turing test is a red herring. It's a thought experiment from a much earlier technical era, and it turned out that the problem was not as hard as it appeared. Technically speaking, the very first chatbots passed the Turing test.


Yes, it's a red herring, but only because people were playing along with it (and still do / would), similar to the Mechanical Turk:





The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin.




https://en.wikipedia.org/wiki/The_Turk


In other words *anything* is potentially mysterious, superhuman, and 'magical' if one isn't pro-active in examining how it works.





I really don't understand the rest of your argument though. We already have loads of autonomous machines in our society, and they are all being upgraded and incrementally improved. This is capitalism, no-one is in control, nobody is looking at the big picture, and there is no-one in a position to stop it. Who controls Google/Alphabet's AI except the company? We don't even have any relevant legislation on the books. Keeping a "human-in-the-loop" is a well established principle, but it's not always adhered to. The US military is certainly working on drones sufficiently autonomous that they probably can't be considered to have HITL anymore. What they'll have is human supervision.


I'm saying that as long as there's human supervision society will ascribe social responsibility to the *person* (supervisor), rather than to the machine itself. (There's no existing self-aware, *autonomous* machine, strictly speaking.)





In point of fact, these brains in boxes may not even be the problem; it might be that the sheer bulk of the interconnectivity of the internet is itself sufficient to trigger an epiphenomenal appearance of self-awareness, in which case nothing that we might dream of using as a control would be present.


You're glorifying the Internet. I don't subscribe to this 'automatically emergent qualitative leap to artificial consciousness' position just because more information pipes are added.





I think this is completely wrong; there is little or nothing that is not enhanced by information processing. So it's going to suffuse our whole technical substrate, with tendrils everywhere, and that's going to happen long before we start worrying about how to turn it off.


I don't agree -- such capabilities, if at all possible, would *have* to be human-consciously programmed-in for such a 'birthing', and could be identified well in advance, for necessary social controversy.

contracycle
15th November 2016, 15:12
In other words *anything* is potentially mysterious, superhuman, and 'magical' if one isn't pro-active in examining how it works.

Problem: AIs are by necessity unknowable, and cannot be examined. Even so basic a neural net as you could download and run on your desktop machine is unknowable.



I'm saying that as long as there's human supervision society will ascribe social responsibility to the *person* (supervisor), rather than to the machine itself. (There's no existing self-aware, *autonomous* machine, strictly speaking.)

Oh OK, I'll accept that. But even then, it's not quite so clear cut. A little while ago, a driver - or passenger, as you like - in an automated vehicle was killed when the car drove under a truck. It is believed the sensors failed to distinguish the white sides of the truck from low overcast sky. But here's where it gets tricky: is the driver responsible, for failing to be in the loop, or is the manufacturer responsible for a product fault?



You're glorifying the Internet. I don't subscribe to this 'automatically emergent qualitative leap to artificial consciousness' position just because more information pipes are added.

Oh I'm really not. How did the human mind come about? If it's not from the fact that we have big brains with lots of connections, what is it from? Obviously that's a rhetorical question, but it is the most simple and parsimonious explanation: the more connections a system has, the more complex its spontaneous self-organisation can be.

If you believe that there is no such thing as a soul, then consciousness must be an emergent phenomenon of organised matter in some manner. Why should it matter whether those connections appear as silicon or jelly? Information structures are independent of their substrate.



I don't agree -- such capabilities, if at all possible, would *have* to be human-consciously programmed-in for such a 'birthing', and could be identified well in advance, for necessary social controversy.

No, the opposite is true; they CANNOT be programmed in. That's why we use neural nets, so that the system evolves by itself. And we cannot tell what goes on in a neural net, it is a complete black box to us. We will never be in a position that we know that we just have to twist one more knob to make it self-aware; we will only know that we have created an AI after the fact. Even then, I'm not sure it will be easy to tell.

ckaihatsu
16th November 2016, 13:30
Problem: AIs are by necessity unknowable, and cannot be examined. Even so basic a neural net as you could download and run on your desktop machine is unknowable.


You're still engaging in mystification, though -- any neural net, just as with conventional computation, is going to be limited to its input, which is necessarily human-controlled.

https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/300px-Colored_neural_network.svg.png
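To make the point concrete, a net like the one in that diagram is just nested weighted sums over its inputs -- here's a minimal forward pass in Python (the weights are arbitrary stand-ins; in a trained net they'd be learned values):

```python
import math

def forward(x, layers):
    """One forward pass through a fully connected net.

    layers is a list of (weights, biases); weights[j][i] connects
    input i to neuron j. Activation is tanh at every layer.
    """
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Arbitrary illustrative numbers -- in a trained net these are learned,
# and no individual weight "means" anything on its own.
net = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # hidden layer: 2 inputs -> 2
    ([[1.0, -1.0]], [0.0]),                   # output layer: 2 -> 1
]
print(forward([1.0, 0.0], net))
```

Every step is plain, inspectable arithmetic on the human-supplied input -- the "black box" part is only that the trained weight values resist interpretation, not that the mechanism is unknowable.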





Oh OK, I'll accept that. But even then, it's not quite so clear cut. A little while ago, a driver - or passenger, as you like - in an automated vehicle was killed when the car drove under a truck. It is believed the sensors failed to distinguish the white sides of the truck from low overcast sky. But here's where it gets tricky: is the driver responsible, for failing to be in the loop, or is the manufacturer responsible for a product fault?


Undoubtedly there was some kind of 'business plan' or other formalization of the endeavor, which would have had to go through the legal channels / 'community' as well. Time to dust that off, then.





Oh I'm really not. How did the human mind come about? If it's not from the fact that we have big brains with lots of connections, what is it from? Obviously that's a rhetorical question, but it is the most simple and parsimonious explanation: the more connections a system has, the more complex its spontaneous self-organisation can be.

If you believe that there is no such thing as a soul, then consciousness must be an emergent phenomenon of organised matter in some manner. Why should it matter whether those connections appear as silicon or jelly? Information structures are independent of their substrate.


Any use of the 'brain' as a metaphor to describe inorganic circuitry is just that -- a *metaphor* only, while everything from human hands is immediately traceable back to human intentions.

A system like the Internet can't transcend its operating parameters just because there's more of it -- at a minimum it would have to *intentionally* be engineered in bulk to be a neural network of some kind, in the direction of whatever nefarious ends you may be imagining. And at that point it becomes more *political* -- and even *criminal* -- than some innocuous 'innovative' project.





No, the opposite is true; they CANNOT be programmed in. That's why we use neural nets, so that the system evolves by itself. And we cannot tell what goes on in a neural net, it is a complete black box to us. We will never be in a position that we know that we just have to twist one more knob to make it self-aware; we will only know that we have created an AI after the fact. Even then, I'm not sure it will be easy to tell.
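
For reference, the quoted 'black box' claim can be illustrated with a toy example -- a tiny network trained on XOR, whose entire learned behaviour ends up as a handful of opaque floating-point weights. A minimal pure-Python sketch, illustrative only:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-2-1 network: two inputs, one hidden layer of two units, one output.
# All of the "knowledge" ends up in these opaque floating-point weights.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden weights + bias
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output weights + bias

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
for _ in range(5000):                      # plain per-sample gradient descent
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)        # output-layer delta
        for j in range(2):                 # backpropagate into the hidden layer
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= 0.5 * d_h * x[0]
            w_h[j][1] -= 0.5 * d_h * x[1]
            w_h[j][2] -= 0.5 * d_h
        w_o[0] -= 0.5 * d_o * h[0]
        w_o[1] -= 0.5 * d_o * h[1]
        w_o[2] -= 0.5 * d_o
after = loss()
```

After training, nothing in w_h or w_o announces "this computes XOR"; the learned function is only recoverable by probing inputs and outputs from the outside.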


(See the diagram above.)

No, this is fantasy, and the 'purpose' of the inorganic structure will still be bound by the goals of the project that initiates it and drives it forward -- ultimately a *social* / political thing.

contracycle
16th November 2016, 21:55
You're still engaging in mystification, though -- any neural net, just as with conventional computation, is going to be limited to its input, which is necessarily human-controlled.

No, it isn't. It may have no more data than its inputs, but it can synthesise conclusions from that data. Data benefits from network externalities, the more of it you have the more valuable each piece becomes.



Undoubtedly there was some kind of 'business plan' or other formalization of the endeavor, which would have had to go through the legal channels / 'community' as well. Time to dust that off, then.

Irrelevant; the point is that nobody is in control, the lines of responsibility are not clear. And while they are not clear, things are still happening. There is no-one with their hand hovering over the off switch.



Any use of the 'brain' as a metaphor to describe inorganic circuitry is just that -- a *metaphor* only, while everything from human hands is immediately traceable back to human intentions.

It's absolutely not. We may not have an entirely clear idea how the brain works yet, so the two can only be described in terms of similarities or differences, but there is no *essential* difference. Consciousness emerges from dumb matter; it doesn't matter whether that dumb matter is hardware or jellyware.


A system like the Internet can't transcend its operating parameters just because there's more of it -- at a minimum it would have to *intentionally* be engineered in bulk to be a neural network of some kind, in the direction of whatever nefarious ends you may be imagining. And at that point it becomes more *political* -- and even *criminal* -- than some innocuous 'innovative' project.

Wrong. Quantity has a quality all of its own; the accumulation of quantitative changes can demonstrably lead to qualitative changes. The self-fulfilling feedback loops of both technology and evolution depend on this process. Infotech is merely the same on steroids, because it can operate at the speeds of silicon and electricity rather than that of carbon bonds.






No, this is fantasy, and the 'purpose' of the inorganic structure will still be bound by the goals of the project that initiates it and drives it forward -- ultimately a *social* / political thing.

Completely false; your distinction between organic and inorganic is a false dichotomy. As I already pointed out, an information structure is independent of its substrate. It does not care what the physical relations between things are, only their logical relationships. You could build a primitive computer made of logic gates with no more than ditches, locks, and water.

There is nothing special about organic matter at all. In fact, all it does is limit us; there is no maximum size for an intellect incarnated in modular hardware. Organic chemistry is no more than a certain set of specific arrangements of carbon, and implies no more than that. It is the arrangements that are important, not the materials. We are ourselves the product of inorganic processes; if meat can do it, so can chips. Only better.
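
The ditch-and-water point is easy to make concrete: a single NAND primitive is all any substrate has to supply, and everything else is wiring. A minimal sketch (Python standing in for the water and ditches, illustrative only):

```python
# One primitive gate; the physical realisation (water, relays, transistors)
# is irrelevant -- only the logical relation matters.
def NAND(a, b):
    return 0 if (a and b) else 1

# Every other gate is just NANDs wired together.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

# 1 + 1 -> sum 0, carry 1
print(half_adder(1, 1))
```

Chain half-adders together and you have arithmetic; the ditches never need to know.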

ckaihatsu
17th November 2016, 13:43
No, it isn't. It may have no more data than its inputs, but it can synthesise conclusions from that data. Data benefits from network externalities, the more of it you have the more valuable each piece becomes.


Irrelevant; the point is that nobody is in control, the lines of responsibility are not clear. And while they are not clear, things are still happening. There is no-one with their hand hovering over the off switch.


It's absolutely not. We may not have an entirely clear idea how the brain works yet, so the two can only be described in terms of similarities or differences, but there is no *essential* difference. Consciousness emerges from dumb matter; it doesn't matter whether that dumb matter is hardware or jellyware.


Wrong. Quantity has a quality all of its own; the accumulation of quantitative changes can demonstrably lead to qualitative changes. The self-fulfilling feedback loops of both technology and evolution depend on this process. Infotech is merely the same on steroids, because it can operate at the speeds of silicon and electricity rather than that of carbon bonds.


Completely false; your distinction between organic and inorganic is a false dichotomy. As I already pointed out, an information structure is independent of its substrate. It does not care what the physical relations between things are, only their logical relationships. You could build a primitive computer made of logic gates with no more than ditches, locks, and water.

There is nothing special about organic matter at all. In fact, all it does is limit us; there is no maximum size for an intellect incarnated in modular hardware. Organic chemistry is no more than a certain set of specific arrangements of carbon, and implies no more than that. It is the arrangements that are important, not the materials. We are ourselves the product of inorganic processes; if meat can do it, so can chips. Only better.


I'm going to respond to all of this as a whole, since it's basically all the same point -- you have a *network* fetish, which is probably the prevailing popular paradigm of our time. Unfortunately the network is not *everything*, and even if we can put forward a magical transformation of the existing Internet into a self-aware entity -- 'Skynet' -- the social question would then shift to 'Who is aiding this artificial monster, and who isn't?', as with access to the physical world through prosthetics. It would potentially displace even *class* as the main human-societal issue / dynamic, with everyone on 'high alert' indefinitely until things could be settled for the continued safety and existence of the human race.

But even this formulation is too generous -- your 'quality-from-quantity' argument is impressive, and I would typically invoke it myself, but not in the context of the Internet as a whole, which *is* in content just a bunch of information pipes and mechanical bit-processing. Without a focused intention from without (human beings) that's all it will ever be, network-theory aside since it's insufficient on its own to produce an emergent self-awareness.

I also have to point-out and emphasize that taking this issue *politically* -- politicizing it -- yields a view of the social-technological terrain in *faction* form. I mean to say that it doesn't *help* the human-collective interest if some insist on seeing a runaway artificial entity as being *inevitable*, because in political terms that's already conceding the *argument* / vision *to* a potential runaway artificial entity that doesn't even exist yet -- it's a passive fatalism, in other words, which is the equivalent of forfeiting *one's own* consciousness and control, or 'playing along', whatever the actual technical level of development happens to be. It's a dangerous position to take, on behalf of everyone else, which is ultimately irresponsible.

contracycle
17th November 2016, 14:35
I don't have a network fetish; this is a technical form which I am describing. Information theory is a scientific discipline, and it has its own specific properties that need to be dealt with as they actually are. It's true to say that if you only imagine such an AI as a disembodied mind, the question would be who aids it: but as I've already pointed out, our entire technical infrastructure is now thoroughly permeated with remotely controllable devices. If hackers can watch you through your laptop webcam, then a rogue AI has access to an immense number of smart objects. All of these devices will be its eyes and ears and hands.

You say that the internet as a whole is "just" a bunch of information pipes and bit processing. The problem is that this is a perfectly good description of the brain too. This IS our prevailing theory of how biological consciousness emerges from dumb matter; an emergent phenomenon that arises from the accumulation of sufficient complexity. There may be some other factors to do with our evolutionary history that may be more important than sheer quantity, but we do not know that, and don't even really have any reason to think it. Now this is not where AI work itself is going on; I merely point out this possibility to indicate how little control we have over any of this process.

I mean, let's say we found out, or at least had a solid theoretical model, that this was true. Could we stop adding more capacity to the internet? Who would take that decision, and who would enforce it? What if there's some lag time, so that it requires, I dunno, a century of incubation before awareness emerges spontaneously - then of necessity all the structure will be in place and appear harmless during that century.

In 1970, Martin Gardner's Scientific American column introduced John Conway's "Game of Life", an experiment in self-organisation. Merely using a very simple setup of a grid of cells, and some simple rules governing when a cell is filled or emptied, it is possible to see spontaneous, emergent self-organisation happen in real time. An implementation can be found here, and it won't take much playing around with before you see it produce stable, self-sustaining forms out of what is in fact a very simple degree of complexity. https://bitstorm.org/gameoflife/
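
The rules in question fit in a few lines. A minimal sketch (Python, using the standard "blinker" oscillator as the stable, self-sustaining form; illustrative only):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2 -- a stable,
# self-sustaining form arising from two rules and a grid.
blinker = {(0, 1), (1, 1), (2, 1)}
gen1 = step(blinker)   # becomes a vertical bar
gen2 = step(gen1)      # back to the original horizontal bar
```
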

Focused intention from human beings is no more required than was the breath of god to be put into clay. Organised matter produces both life and consciousness spontaneously.

You are quite right that this would supersede class as a primary issue; it's actually much worse than that, it's a potential extinction event up there with super-volcanoes and asteroid strikes. In fact it offers a perfectly sensible solution to the Fermi Paradox; we don't see aliens because organic life might just be an evolutionary step towards machine life, displacing its biological precursor.

I'm not saying this is necessarily inevitable; I am saying it is a real danger, and that humanity should be worrying about it in the way we worry about climate change. And that is why it is a current issue, because research is proceeding rapidly, and there is no control, not even a body that could theoretically exert control.

ckaihatsu
17th November 2016, 18:58
I don't have a network fetish; this is a technical form which I am describing. Information theory is a scientific discipline, and it has its own specific properties that need to be dealt with as they actually are.


I don't dispute this, but I'm saying that you're misapplying the theory.





It's true to say that if you only imagine such an AI as a disembodied mind, the question would be who aids it: but as I've already pointed out, our entire technical infrastructure is now thoroughly permeated with remotely controllable devices. If hackers can watch you through your laptop webcam, then a rogue AI has access to an immense number of smart objects. All of these devices will be its eyes and ears and hands.





More Difficult: Any Webcam

Hacked networked cameras are one thing – they’re designed to broadcast their images anyway, just not normally to the entire world – but is it possible to access any old regular laptop webcam? And would you even know if it was being accessed?

The good news is that generally speaking, no, a hacker can’t simply sit here and type in a URL to look at your webcam. What they can do though, is systematically scan a network for vulnerable operating systems and automatically inject a Trojan if something useful is found. This is fairly easily thwarted by staying behind firewalls, closing ports that aren’t needed and staying up to date with security fixes – in other words, taking a basic level of precautions. This is why using Windows XP is now thoroughly dangerous: there will be untold numbers of bugs left unfixed from this point forward.




http://www.makeuseof.com/tag/how-easy-is-it-for-someone-to-hack-your-webcam/





You say that the internet as a whole is "just" a bunch of information pipes and bit processing. The problem is that this is a perfectly good description of the brain too.


No, it isn't, because the brain produces self-awareness, while the Internet is *not* self-aware, and can't be -- the respective 'engineering' for each is vastly different from the other.





This IS our prevailing theory of how biological consciousness emerges from dumb matter;


*Inorganic* matter is dumb, organic matter *isn't* dumb.





an emergent phenomenon that arises from the accumulation of sufficient complexity.


But this is your misapplication of network theory again -- you can't just ignore the *substrate* at hand in favor of sheer 'complexity' (network theory) alone.





There may be some other factors to do with our evolutionary history


Thank you. Organisms inherently grow and evolve inescapably within the context of their natural environment, while inorganic mechanisms inherently are 'dumb' and have to be *engineered* as tools to do one thing or the other.





that may be more important than sheer quantity, but we do not know that, and don't even really have any reason to think it.


Yes, we *do* know this, and I just described the difference.





Now this is not where AI work itself is going on; I merely point out this possibility to indicate how little control we have over any of this process.


Horseshit. Use of machinery is a *social* thing, and can potentially be politicized.





I mean, let's say we found out, or at least had a solid theoretical model, that this was true. Could we stop adding more capacity to the internet? Who would take that decision, and who would enforce it?


Who enforces the regulations around the transfer of fissile material like uranium -- ? The bourgeois nation-states, etc.





What if there's some lag time, so that it requires, I dunno, a century of incubation before awareness emerges spontaneously - then of necessity all the structure will be in place and appear harmless during that century.


Good movie script premise -- I'd watch that.





In 1970, Martin Gardner's Scientific American column introduced John Conway's "Game of Life", an experiment in self-organisation. Merely using a very simple setup of a grid of cells, and some simple rules governing when a cell is filled or emptied, it is possible to see spontaneous, emergent self-organisation happen in real time. An implementation can be found here, and it won't take much playing around with before you see it produce stable, self-sustaining forms out of what is in fact a very simple degree of complexity. https://bitstorm.org/gameoflife/


Yup, I'm familiar with it -- but it's still a leap to some sort of organismic self-awareness.

Also see the following post, f.y.i.:


Complexity and dialectics

http://www.revleft.com/vb/threads/181977-Complexity-and-dialectics?p=2876736#post2876736





Focused intention from human beings is no more required than was the breath of god to be put into clay. Organised matter produces both life and consciousness spontaneously.


Only with the presence of amino acids, as far as we can tell so far.





You are quite right that this would supersede class as a primary issue; it's actually much worse than that, it's a potential extinction event up there with super-volcanoes and asteroid strikes. In fact it offers a perfectly sensible solution to the Fermi Paradox; we don't see aliens because organic life might just be an evolutionary step towards machine life, displacing its biological precursor.

I'm not saying this is necessarily inevitable; I am saying it is a real danger, and that humanity should be worrying about it in the way we worry about climate change. And that is why it is a current issue, because research is proceeding rapidly, and there is no control, not even a body that could theoretically exert control.


Well I appreciate your (valid) concern, but 'worry' isn't enough -- if we can deem this so-called 'singularity' to be any kind of a possible real-world threat then humanity should be taking more of a hands-on stance to it, which *is* potentially within the societal capabilities of the current nation-state-based world, *unlike* climate change, which often seems above-everyone's-heads due to petroleum being such an integral part of the global energy supply and the worldwide economy itself (etc.).

On *all* of these issues we have the opportunity to assert our revolutionary line, which is *better* than all other kinds of politics out there, as from the reformist soft-left. Once the working class is able to take control of societal production it would be in its own best interests to *not* fuck-up the environment, itself, etc., since it would be a single whole population -- *unlike* the status quo of entrenched, conservative interests over 'the pie', while economic externalities like the environment, and the rest of the world be-damned.

contracycle
17th November 2016, 20:47
I don't dispute this, but I'm saying that you're misapplying the theory.

I'm really not. I'm taking direction from the most prominent thinkers on AI, both current and older. Nothing I have said is unorthodox.



No, it isn't, because the brain produces self-awareness, while the Internet is *not* self-aware, and can't be -- the respective 'engineering' for each is vastly different from the other.

There are two things here. I did not say that the internet was self-aware, I said it is theoretically possible for it to become self-aware, given our current understanding. But your assertion that it is *not possible* is unjustifiable. If raw connectivity can do the trick, then the engineering is incidental.



*Inorganic* matter is dumb, organic matter *isn't* dumb.

That clearly is not the case. Organic chemistry requires only the presence of carbon, that's it. And carbon is not the only material that is thought capable of achieving life; silicon is the next best candidate. Silicon bonds, though, operate much slower than carbon bonds, and such a life form would behave very differently to the kind we are used to. But the real issue is that there is nothing beyond these factors that makes carbon chemistry special. There is no spark of life, and once you have life, you start to develop information processing... and with sufficient information processing, you get consciousness as an emergent phenomenon. This assertion just is not true, or at least, you have given no argument for why it is true.



But this is your misapplication of network theory again -- you can't just ignore the *substrate* at hand in favor of sheer 'complexity' (network theory) alone.

Yes, I can, because an information structure is independent of its substrate. It is dependent on logical relations, not physical ones. In fact we do this all the time with software that is built to emulate specific hardware. Logic is king, the matter in which it resides is incidental. Thus the proposition that human consciousness could be copied into a mechanical substrate is not at all beyond the bounds of possibility, although the specifics might be tricky.

When I say complexity in any substrate I mean exactly that, any substrate. It does not matter what atoms it is made of.



Thank you. Organisms inherently grow and evolve inescapably within the context of their natural environment, while inorganic mechanisms inherently are 'dumb' and have to be *engineered* as tools to do one thing or the other.

No. The natural environment of an AI is our technical layer. We are the evolutionary pressure being applied to AI.



Yes, we *do* know this, and I just described the difference.

Sorry but this is still nonsense. The correlation is not exact, but broadly speaking the bigger the brain the more intelligent an animal will be. Quantity -> quality. We have no prima facie evidence that anything other than quantity of connections is required for consciousness to emerge.




Horseshit. Use of machinery is a *social* thing, and can potentially be politicized.

I didn't say it COULD NOT be politicised, and when the likes of Elon Musk and Stephen Hawking talk about these dangers, they are attempting to get it politicised before, as Musk puts it, we summon a demon we cannot control. I am saying it IS NOT politicised now, which is worrying given how fast progress is being made. The ethical implications of machine intelligence are under discussion right now; here is a paper on the topic, if you like: https://intelligence.org/files/EthicsofAI.pdf



Who enforces the regulations around the transfer of fissile material like uranium -- ? The bourgeois nation-states, etc.

Again, they conceivably can do it, but it is not yet being done. The danger is that someone, somewhere, does something dangerous before we actually get around to establishing proper controls. This is capitalism, it's competitive, inventive; not only is there no guarantee that it will be smart enough to restrain itself, but the evidence of global warming gives serious cause for concern that it might actively ignore these worries.



Good movie script premise -- I'd watch that.

I'm pretty sure it must have been done, somewhere; the idea of the global network waking up actually predates the internet, appearing in Arthur C. Clarke's story "Dial F For Frankenstein", published in 1964.


Yup, I'm familiar with it -- but it's still a leap to some sort of organismic self-awareness.

Again, what other candidates do we have? The fossil record is pretty clear: brains get steadily bigger over millions of years, through differing species. Big brains are clearly an enhancement to evolutionary fitness, and we are at the tip of that process in deep time. The biggest brains that ever existed on earth are running around right now, and I don't think it is a coincidence that we are also the only ones to have mastered fire and kicked off the whole sequence of technical progress. There are other big brained creatures like the cetaceans, particularly killer whales, who might well be as smart as we are, and even possibly smarter.


Only with the presence of amino acids, as far as we can tell so far.

And what is special about those? There's still no breath of god here; what's important about DNA is the information it carries, not the chemistry. You're ending up in a variation of the god-of-the-gaps argument.



Well I appreciate your (valid) concern, but 'worry' isn't enough -- if we can deem this so-called 'singularity' to be any kind of a possible real-world threat then humanity should be taking more of a hands-on stance to it, which *is* potentially within the societal capabilities of the current nation-state-based world, *unlike* climate change, which often seems above-everyone's-heads due to petroleum being such an integral part of the global energy supply and the worldwide economy itself (etc.).

As I have already made clear several times, I have not at any point said that we COULD NOT exert social control, only that we ARE NOT, and under prevailing capitalist conditions, may not until it is too late. I consider this an excellent reason for replacing the chaos of capitalism, just as it is in the case of nuclear weapons. And AGW. And for that matter, those idiots broadcasting into space, which might also be gambling with extinction.


On *all* of these issues we have the opportunity to assert our revolutionary line, which is *better* than all other kinds of politics out there, as from the reformist soft-left. Once the working class is able to take control of societal production it would be in its own best interests to *not* fuck-up the environment, itself, etc., since it would be a single whole population -- *unlike* the status quo of entrenched, conservative interests over 'the pie', while economic externalities like the environment, and the rest of the world be-damned.

You appear to be tilting at windmills, because not only have I not said anything to disagree with that, I am actively agreeing with it.

ckaihatsu
18th November 2016, 14:02
I'm really not. I'm taking direction from the most prominent thinkers on AI, both current and older. Nothing I have said is unorthodox.


'Complexity' does not automatically translate into 'self-awareness'. You mentioned before that:





You could build a primitive computer made of logic gates with no more than ditches, locks, and water.


So if this type of computer was built large enough, would it necessarily yield (and demonstrate) self-awareness and a will of its own -- ? We *currently* use massively complex and fast microprocessors today in every laptop or phone, and yet the *structure* of such computation is *not* going to facilitate a qualitative paradigm shift to an artificial consciousness. Your reliance on 'network theory' is dependent on the *wrong kind* of network -- a neural network would be more apt, but actually going in that direction still requires the intentions of a 'Dr. Frankenstein' to produce a 'Frankenstein's monster'.





There are two things here. I did not say that the internet was self-aware, I said it is theoretically possible for it to become self-aware, given our current understanding. But your assertion that it is *not possible* is unjustifiable. If raw connectivity can do the trick, then the engineering is incidental.


I'm *disputing* your premise that 'raw connectivity' is all that's required -- this is your continued misapplication of network theory. It's akin to saying that all car traffic the world over would produce some kind of overall entity self-awareness, due to its 'raw connectivity' (interactions), when the *substrate* is just discrete vehicles each trying to navigate to their own destinations. (Or, a better example may be the world's patchwork of nation-states, which doesn't produce a raw, organic global-scale 'entity' despite the bulk numbers of people involved and the total number of complex social connections therein, due to pre-existing factors of *substance* / content, like imperialism, that have nothing to do with generic 'network connections' themselves.)





That clearly is not the case. Organic chemistry requires only the presence of carbon, that's it. And carbon is not the only material that is thought capable of achieving life; silicon is the next best candidate. Silicon bonds, though, operate much slower than carbon bonds, and such a life form would behave very differently to the kind we are used to.


You're correct -- I misspoke. Here's more:





The clay hypothesis suggests how biologically inert matter helped the evolution of early life forms: clay minerals form naturally from silicates in solution. Clay crystals, as other crystals, preserve their external formal arrangement as they grow, snap, and grow further. Clay crystal masses of a particular external form may happen to affect their environment in ways that affect their chances of further replication. For example, a "stickier" clay crystal is more likely to silt a stream bed, creating an environment conducive to further sedimentation. It is conceivable that such effects could extend to the creation of flat areas likely to be exposed to air, dry, and turn to wind-borne dust, which could fall randomly in other streams. Thus—by simple, inorganic, physical processes—a selection environment might exist for the reproduction of clay crystals of the "stickier" shape.

There follows a process of natural selection for clay crystals that trap certain forms of molecules to their surfaces that may enhance their replication potential. Complex proto-organic molecules can be catalysed by the surface properties of silicates. When complex molecules perform a "genetic takeover" from their clay "vehicle", they become an independent locus of replication – an evolutionary moment that might be understood as the first exaptation.




https://en.wikipedia.org/wiki/Graham_Cairns-Smith


And:





The study of abiogenesis involves geophysical, chemical, and biological considerations,[12] with more recent approaches attempting a synthesis of all three.[13] Many approaches investigate how self-replicating molecules, or their components, came into existence. It is generally thought that current life on Earth is descended from an RNA world,[14] although RNA-based life may not have been the first life to have existed.[15][16] The classic Miller–Urey experiment and similar research demonstrated that most amino acids, the basic chemical constituents of the proteins used in all living organisms, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth.




https://en.wikipedia.org/wiki/Abiogenesis


---





But the real issue is that there is nothing beyond these factors that makes carbon chemistry special. There is no spark of life, and once you have life, you start to develop information processing... and with sufficient information processing, you get consciousness as an emergent phenomenon. This assertion just is not true, or at least, you have given no argument for why it is true.


It may help to define just what 'consciousness' is. I tend to subscribe to the following test:

https://en.wikipedia.org/wiki/Mirror_test


Note that very few species exhibit self-awareness according to this benchmark -- there's more than just 'raw interconnections', or network theory, at work when it comes to this phenomenon. (The process of a species' dependence on the larger environment, as a factor in its evolution, has much relevance here.)


---





Yes, I can, because an information structure is independent of its substrate. It is dependent on logical relations, not physical ones.


Again, I disagree -- this is an oversimplification on your part.





In fact we do this all the time with software that is built to emulate specific hardware. Logic is king, the matter in which it resides is incidental.


You're erroneously comparing present-day linear-type computation, to the hypothetical of an emergent inorganic self-awareness -- the two are hardly comparable.





Thus the proposition that human consciousness could be copied into a mechanical substrate is not at all beyond the bounds of possibility, although the specifics might be tricky.


Currently it's just an imagined fiction. You go first. (grin)





When I say complexity in any substrate I mean exactly that, any substrate. It does not matter what atoms it is made of.


(Again) complexity does not necessarily equate to consciousness -- the view of cities from above shows much complexity, but we don't consider the blueprint 'networked' arrangement of a city's inorganic infrastructure to be 'conscious' solely as a result of its complexity itself.


---





Thank you. Organisms inherently grow and evolve inescapably within the context of their natural environment, while inorganic mechanisms inherently are 'dumb' and have to be *engineered* as tools to do one thing or the other.





No. The natural environment of an AI is our technical layer. We are the evolutionary pressure being applied to AI.


There *is no* artificial intelligence currently existing, in the sense of an artificial consciousness -- they're just fancy tools that only exist due to human intention, and are necessarily limited to the constraints of human intentions / programming / engineering.





Sorry but this is still nonsense. The correlation is not exact, but broadly speaking the bigger the brain the more intelligent an animal will be. Quantity -> quality. We have no prima facie evidence that anything other than quantity of connections is required for consciousness to emerge.


Well, for one thing a brain cell (neuron) is not like any other cell in the body, so we can readily say that the 'substrate' (type of cell) has much to do with the emergent property of human consciousness.





I didn't say it COULD NOT be politicised, and when the likes of Elon Musk and Stephen Hawking talk about these dangers, they are attempting to get it politicised before, as Musk puts it, we summon a demon we cannot control. I am saying it IS NOT politicised now, which is worrying given how fast progress is being made. The ethical implications of machine intelligence are under discussion right now; here is a paper on the topic, if you like: https://intelligence.org/files/EthicsofAI.pdf




Again, they conceivably can do it, but it is not yet being done. The danger is that someone, somewhere, does something dangerous before we actually get around to establishing proper controls. This is capitalism; it's competitive and inventive, and not only is there no guarantee that it will be smart enough to restrain itself, but the evidence of global warming gives serious cause for concern that it might actively ignore these worries.


Well I don't mean to place *too* much trust in the oversight of capitalist nation-states, but if governments wind up considering the 'runaway-artificial-consciousness' possibility to be as much of a danger as domestic terrorism or the black-market sector, then I think they would snap into action, since it would inherently be a threat to their own social hegemony.





I'm pretty sure it must have been done, somewhere; the idea of the global network waking up actually predates the internet, appearing in Arthur C. Clarke's story "Dial F For Frankenstein", published in 1964.


---





In 1970, John Conway's "Game of Life" was published, an experiment in self-organisation. Using merely a very simple set-up of a grid of cells, and some simple rules governing when a cell is filled or emptied, it is possible to see spontaneous, emergent self-organisation happen in real time. An implementation can be found here, and it won't take much playing around before you see it produce stable, self-sustaining forms out of what is in fact a very simple degree of complexity. https://bitstorm.org/gameoflife/





Yup, I'm familiar with it -- but it's still a leap to some sort of organismic self-awareness.
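For anyone following along who hasn't played with it: the whole thing fits in a dozen lines of Python. This is a minimal sketch (not the implementation linked above), tracking live cells as a set of (x, y) coordinates:

```python
from collections import Counter

# Conway's rules: a live cell survives with 2 or 3 live neighbours;
# a dead cell becomes live with exactly 3; everything else is empty.

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": a self-sustaining form that walks across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)

# After 4 generations the glider reappears, shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(gen == shifted)  # True
```

Two rules, and out comes a pattern that propagates itself indefinitely -- genuinely emergent order, though (as above) still a long way from anything like self-awareness.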





Again, what other candidates do we have? The fossil record is pretty clear: brains get steadily bigger over millions of years, through differing species. Big brains are clearly an enhancement to evolutionary fitness, and we are at the tip of that process in deep time. The biggest brains that ever existed on earth are running around right now, and I don't think it is a coincidence that we are also the only ones to have mastered fire and kicked off the whole sequence of technical progress. There are other big-brained creatures like the cetaceans, particularly killer whales, who might well be as smart as we are, and even possibly smarter.


Okay, but you're still talking apples-and-oranges -- a mathematical *abstraction*, The Game of Life, is not simply a template that we can just plop down onto any given inorganic situation, to declare it 'a complexity that yields self-awareness'.





And what is special about those? There's still no breath of god here; what's important about DNA is the information it carries, not the chemistry. You're ending up in a variation of the god-of-the-gaps argument.





All organisms on Earth employ the same workforce to perform a wide range of essential biochemical tasks. This workforce is comprised of proteins, which are constructed from a long string of amino acids attached to each other. Even for proteins with particularly long chains of amino acids, there are still only 20 different types of amino acids which are genetically encoded. These amino acids are essentially the building blocks of life, and the same 20 standard amino acids have been used in proteins throughout the history of life on Earth, since the existence of the Last Universal Common Ancestor three to four billion years ago.




http://www.astrobio.net/origin-and-evolution-of-life/mapping-amino-acids-to-understand-lifes-origins/


---





As I have already made clear several times, I have not at any point said that we COULD NOT exert social control, only that we ARE NOT, and under prevailing capitalist conditions, may not until it is too late. I consider this an excellent reason for replacing the chaos of capitalism, just as it is in the case of nuclear weapons. And AGW.


I agree with the politics, of course, and I would prefer to see state oversight and general publicity about any projects that may be attempting to 'birth' a purported artificial consciousness. Such an endeavor would inherently be a *political* act, since any such hypothetical entity would have to be *socialized* somehow, necessitating some kind of species-wide / global discussion and consensus.





And for that matter, those idiots broadcasting into space, which might also be gambling with extinction.


Hmmmmm....





You appear to be tilting at windmills, because not only have I not said anything to disagree with that, I am actively agreeing with it.


Yeah, I know we're on the same page regarding politics.