
View Full Version : Post-Science: Virtual Debate



Dermezel
7th March 2010, 05:22
This is not necessarily Marxist; however, it comes from a Dialectical Materialist perspective, and it does show how much more limited bourgeois positivism is in comparison.

Note that this is unlike reactionary bourgeois ideologies such as postmodernism or any sort of neo-mysticism (which does not apply to all postmodernism, but to much of it; see Empire (http://en.wikipedia.org/wiki/Empire_%28book%29), where the postmodern Marxists themselves state that much of postmodernism is a bourgeois philosophy). It is also unlike existentialism (Sartre himself calls existentialism a parasitical system (http://www.marxists.org/reference/archive/sartre/works/critic/sartre1.htm)).

The post-scientific method, as I present it, is not an entirely ideological development but a technologically driven one, arising from real technical innovations. All of it takes place within the context of Dialectical Materialism. It is a revolutionary cultural development arising from AI, cybernetics and the Internet.

The fictional debate opens with an AI moderator:


What post science is:

The origin of the term is debatable, but roughly ten years ago the term "post-science" became mainstream in net circles. At the time, it was not known what exactly it was. The loose definition was that post-science is what post-scientists practice. Now it appears to have more concrete forms, though these forms still defy any exact definition. AI research, net research, swarm-analysis, and cybernetic super-thought all seem indicative of a "New Method" that differs as radically from the old as Francis Bacon's method did from that of the earlier Aristotelian scholastics.

But is this mere veneer? Can science become obsolete, or is it some sort of eternal truth?

Many will argue, especially within the scientific community itself, that these new methods are nothing but science evolving. Perhaps evolving in strange and unexpected directions, but science nonetheless. They will argue that the essence of science consists of certain elements which must be present in any working empirical method.

But what of methods that lack falsifiability, peer review, paradigms, and at times even observation? Can these be considered true science? I open this debate with the pro side:

Thesis of a Net Synthesizer: Consider a group of deities. These deities are able, by means of supernatural forces and an intelligence far beyond that of any human, to beam knowledge they attain from the universe directly into the human mind. Or not even knowledge: like Prometheus, they give us tools directly, without our knowing how they work at all. We just know they work because we can see them working; the rest of what the deity says, you have to accept by virtue of its effectiveness. Would that be science?

Would getting mystical knowledge from a deity be science?

Now replace "deity" with "AI", and "mystical" with "cybernetic". What is the difference?

On the one hand you have a deity using supernatural forces to beam knowledge directly into men's minds. On the other, an advanced AI with a level of thought surpassing man's, at least in specific, regional fields, to the point where early tribal and agricultural peoples would have considered it a deity. And instead of beaming in mystical knowledge, it uses the net to upload knowledge directly into a cyberbrain.

We know that the new Mega-class AIs of modern-day corporations and militaries work. But sometimes we don't even know how they work. Sometimes the owners themselves do not.

Take for example the fatty fingers problem, a hard problem that scientists today say is impossible to solve. One company specializing in nanotechnology built an AI specifically to solve this problem. Five years later the company created nanotech devices which broke this hard theoretical limit. How?

No one knows. The company keeps any information produced by the AI secret for copyright and trademark purposes. But internal leaks indicate that even the company's top scientists don't know. The AI's understanding of the problem is simply so far beyond a human's, or even a cybermind's, that they cannot follow how it came to its conclusions. All they know is that if they follow specific instructions, they get certain results.

Perhaps this is stupidity, or deity. Perhaps the result is merely cosmetic. Perhaps much of this is made up: the corporate brass knows exactly how the AI researched it, or a human researched it and they just say it was an AI. Perhaps.

But what we do know is that the fatty fingers barrier has been bypassed. And we must also prepare ourselves for the possibility that the rest is true.

Now, is that science? I would argue no. Scientists do not sit around a guru who passes down knowledge they do not understand simply because the results create profitable technologies. And they do not keep the knowledge a secret.

And the fact is we have no idea how this AI did it. From what I've seen, it may not even have used observation and testing.

Consider how it could be, and there is reason to suspect as much, an almost purely deductive program. A deductive program using the net and synthesizing knowledge.

I myself, as a Net Synthesizer, find this all too likely. As many know, an increasing amount of research is being attained not by what many scientists call "real research" but by finding patterns on the net that have not yet been noticed. The process is simple: you look at discovered principles or patterns in field A, say architecture, and apply them to field B, say biology. This actually happened.

One Net Synthesizer revolutionized how we classify organisms in exactly this manner. Simply put, current biological taxonomy is based on the clade system, which is how scientists organize the evolutionary branching points of species. But this was always done in two dimensions, simply because biologists are not trained to think in three dimensions and find it impractical to model in such a format.

But architectural design has dealt with 3-D modeling for centuries. It was simply a matter of applying what was known in architecture to biological categorization. This innovation, or discovery, whatever you want to call it, made biological models far more efficient and accurate. It gives a much truer picture of evolution, because life doesn't just evolve in a straight line of ancestors and descendants: it splits into cousins, who share multiple ancestors and branch into multiple descendants, all of which may continue to co-exist at the same time. In fact, one ancestral species could give birth to a new branch tens or hundreds of thousands of years after the old branch has itself already split in this manner. Modeling that in 2-D is difficult, but in 3-D it is simple and easy.
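The idea can be sketched with a toy model: give each species node an explicit time coordinate, so an ancestral lineage can persist and branch again long after an earlier split. (The class and names below are purely illustrative assumptions, not any real taxonomy software.)

```python
# Illustrative sketch: a phylogenetic tree whose nodes carry an explicit
# time axis, so lineages can co-exist and branch again after earlier splits.

class Lineage:
    def __init__(self, name, origin_time):
        self.name = name
        self.origin_time = origin_time  # when this lineage first appeared
        self.branches = []              # (split_time, descendant) pairs

    def branch(self, name, split_time):
        """An ancestral lineage can split at any time after its origin,
        even long after earlier splits -- it keeps co-existing."""
        assert split_time >= self.origin_time
        child = Lineage(name, split_time)
        self.branches.append((split_time, child))
        return child

    def contemporaries(self, time):
        """All lineages in this subtree already alive at a given time."""
        alive = [self] if self.origin_time <= time else []
        for _, child in self.branches:
            alive.extend(child.contemporaries(time))
        return alive

# An ancestor splits once, then again tens of thousands of years later,
# while the earlier branch continues to co-exist alongside it.
root = Lineage("ancestor", 0)
early = root.branch("early-branch", 10_000)
late = root.branch("late-branch", 90_000)

print([l.name for l in root.contemporaries(95_000)])
```

A 2-D diagram must flatten these overlapping lifespans; with an explicit time axis the late second split is just another branch point on a still-living lineage.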

But it doesn't end there. This revolution, you see, begat several others, in several other fields and in rapid succession. Because architecture itself didn't stand still after the advent of 3-D modeling; this way of modeling was itself fine-tuned and revolutionized over time. So it wasn't as if mere 3-D modeling was brought to the table, but several generations of 3-D modeling: super-advanced 3-D modeling versus 2-D modeling.

But is that science? The person did not really make a hypothesis, run experiments, or conduct field observations. They just took one piece of knowledge and applied it from one field to another. It was far more deductive than inductive, and yet far more synthetic than analytic.

Synthetic 'a priori' knowledge, that elusive field which Kant so desperately wanted to find.

Now, I believe this is how Nanotech's AI works, because there are no records of any experiments, and because most AIs nowadays are geared towards net research, not experimentation. Net research is often many times cheaper, gets more results, and gets them faster. It's much easier to look at what somebody already spent a great deal of time and effort to study, and compare data, than to construct expensive experiments of your own.

But even if the opposite is true, and the AI did make its own experiments and observations, they are not verifiable or peer reviewed. There is no social element to it, because there is no bias, at least not in the way we typically think of bias. That is why Nanotech's CEOs don't care whether they can understand what the AI is doing: the AI is getting results.

This is only one example of the various new methods we are seeing nowadays, and they are as different from science as science is from philosophy or mathematics. Sure, they may share some very, very general principles. After all, math and science both require "thinking". Philosophy and science both require "questioning". And in a sense science and post-science both require data, observation, and verification, in a very vague sense. But these are all of a qualitatively different kind. Verification in science means the experiment has been peer reviewed and replicated, and is, to quote the revisionist Karl Popper, "generally liable to being falsified in a practical sense." These new discoveries, made by AIs, cyberminds and Net Researchers of various types, ranging from mere programs to baseline humans, have no need of such verification or peer review. Verification exists in the results, or in simple deduction.

And with the improvements in VR, 3-D printing and augmented reality this will only accelerate.

Given this reality, I find it ironic that modern-day scientists, so long the champions of progress, have adopted such a reactionary and traditionalist tone. Perhaps they now know how the Rationalist philosophers felt so many centuries ago.

Response from a Physicist:

"The speaker's faith in technology is amazing. And we scientists are not being reactionary and traditionalist; in fact, scientists are known for challenging tradition throughout history. The idea that we are being traditionalist is absurd!

Just ask Galileo about how scientists are traditionalist.

Anyway, I'd like to point out that in so many ways you are not qualified to make half the assertions you make. My peers and I have spent decades studying these subjects. You "Net Synthesizers", as you call yourselves, are basically just internet geeks, on average in your 20s and 30s. I bet you don't even have a degree. Do you know what a degree is? A degree means you have studied a subject for at LEAST ten years. That you have been questioned on it. Taken tests. Written a thesis that contributed something new to the field.

But that's not my point. My point is this: have you ever heard the story of the Spider and the Bee? It is a pseudo-Aesopian fable written by Jonathan Swift, a man before your time whom you have likely only heard of through a Net article, but never actually read. The story tells how the Bee collects knowledge, adds to it, and stores it, building something greater. The Spider, by contrast, only parasitizes it, all the while considering himself a more "independent" thinker. In reality all he does is steal.

That is what your Net "research" largely consists of: stealing, not contributing. That is all Post-Science, as you self-describe it, comes down to: stealing knowledge. It is the philosophy of entitlement writ large as epistemic method.

And you talk of all this new technology: cybernetics, AI, smart swarms, and augmented/virtual realities. But where did these new technologies come from? Science.

This isn't the magic of deities, as your FALSE ANALOGY pretends. It is something tangible, something we can explain down to the very atom. THAT is the difference between some mystical insight stemming from a deity and these so-called "New Methods".

Lastly, I finish by asking the audience to consider how much science has contributed and will continue to contribute. Science has explained the origin of life and of humanity. It has explained the very nature of the universe and the stars, and how the human mind works. It has sent us into outer space, and yes, even developed cybernetics and robotics.

By contrast, what have these fads given us? Mere gadgets. Sooner or later what they parasitize will hit a dead end, because all the real knowledge they rely on will be exhausted.

I would like to end this speech with a quote from the great physicist, and my personal hero, Albert Einstein: "All our science, measured against reality, is primitive and childlike, and yet it is the most precious thing we have."

Closing Arguments and Questions:

NS: I expected better arguments from my opponent than ad hominems, but I suppose he is desperate.

He is desperate because, while he may claim we post-scientists are the parasites, science (the peer-reviewed, hypothesis-testing, empirically bound enterprise he engages in, and in which he has a vested interest given the amount of time he studied for it, as is evident from his mention of his "degree") is largely relegated to academia. It is mostly paid for with tax money.

That is because corporations no longer find it very profitable. Why spend ten years funding a line of research, when net synthesis and an AI program can deliver better results, faster, and cheaper?

To explain: most innovations have been shown to be horizontal, not vertical. Most of the revolutionary discoveries that really lead to technological breakthroughs are found by taking what is learned in one field and applying it to another.

So why spend exponentially more on hypothesis testing, which must be "peer reviewed" and thereby profits your competitors, when you can hire someone, or something, to do it faster at a fraction of the cost? Consider that before you consider who is and who is not the parasite.

Second, AI and cybernetics are not the only new methods emerging. As mentioned before, we have VR and AR, so-called "Sixth Sense" technologies. There are Smart Swarm Networks (SSNs) that the military has been experimenting with for years: basically artificial organisms with an accelerated life cycle that are able to collectively learn from past mistakes. And lastly I will mention hyperthought, where a biosynthesized brain is able to intuitively understand principles of psychology that are decades ahead of what scientific psychologists can predict.

Physicist:

I was not making ad hominems. I was merely noting facts. The fact is my opponent has no formal training in many of the fields on which he comments. Look, before I WORKED for my degree, I used to think I knew a lot of things too.

The fact is that formal academic training imbues a quality that just can't be replicated by searching the internet. You can't just read internet encyclopedias like wikis, or search with engines like Google, and know what a TRAINED scientist knows. The knowledge will be superficial.

And that is all this post-science is. Superficial. Facade.

There is genuine knowledge, and there is Chinese Room knowledge.

Again, as I said before, all this technology would not have existed without science making it possible in the first place.


Questions:

1- Is science one method or many? What I mean is that biology and astrophysics are both called science, but they are in fact studied in very different ways. Studying animals in the wild is very different from testing particles and equations in a lab, not least because natural observation is far more open to post-hoc reasoning, whereas in astrophysics correlation can almost always be assumed to equate to causation. Likewise with, say, psychology and astronomy. Given that, does "science" even exist as anything more than a convenient cover?

Phys: Good question. This goes to the heart of my argument about general principles shared between various disciplines. On the surface, sure, biology and, say, psychology or astrophysics are very different methods, almost so different that it may not make sense to call them both science. I mean, calling them both science makes it sound like they are sort of the same, right? Like you could go to school, take one class in something called "science", and go out to do zoology or study Big Bang theory. *laughter*

But both are mostly based on empirical evidence. Both are falsifiable. Both are peer reviewed. These may even work in different ways, but underneath it all is the same method.

NS: I would just like to qualify the assertion regarding falsification. As I noted earlier, the concept has become vague and derivative compared with the original formulations of Karl Popper. Popper's falsificationism began with strong statements like "a theory has to be falsifiable, period," but he soon changed it to something like "liable to be falsified by practical means," and then to three-part statements where the conditions were that a theory:

1. is liable to be falsified by data,

2. is tested by observation and experiment, and

3. makes predictions.

But as has been said, "Kuhn and others observed that no science in fact looks like this model."

This led philosophers like Paul Feyerabend to argue that there was in fact no scientific method whatsoever. I disagree.

Nowadays "falsifiable" means something like "liable to be falsified by an ordinary understanding of the term," and generally what is meant by that is parsimony and coherence. In a vague sense, I would say post-science relies more on the underlying principles of falsification than on Popper's formulation of falsification itself.


2- Aren't you implying science will last forever?

Phys: That's a meaningless question, in my opinion. Much like the liar paradox, certain phrases must have referents to make sense. The idea of science lasting forever is such a statement, because science by its very nature rests on certain philosophical assumptions that are non-referential.

I'm not saying science is all there is out there. There is religion, philosophy, and art. But to argue that science will some day become obsolete as the best way of describing our natural universe makes no sense to me.

NS: Well, as you can tell, that's exactly what he is saying, and he is using circular and metaphysical reasoning to say it. Science, by definition, in his mind, has to be true.

Aristotelian philosophers and Platonic Scholastics made similar arguments in the Middle and High Middle Ages.

3- Will Post-Science someday be replaced?

Phys: Ha! Another meaningless question.

NS: Perhaps. We've already speculated on the notion of post-technological societies. Such a notion is currently beyond our comprehension in anything but the vaguest conceptions, but we project that if such a society developed, it would certainly make post-science, at least in its concrete form, obsolete.

Closing Statements:

Phys: I just want to remind everyone here that the only reason we can even have this debate is because of science.

Science took us from the barbarism of the Middle Ages to the highly advanced post-industrial society we have today.

I will now end by saying "In science, the burden of proof falls upon the claimant; and the more extraordinary a claim, the heavier is the burden of proof demanded."

The idea that some post-science will make science obsolete is a very extraordinary claim indeed!

NS: My response will not be as passionate as my opponent's.

With respect to the claim being extraordinary: if you analyze the meaning of the term, you find that it refers to background knowledge. The more a claim departs from background knowledge, the more extraordinary it will seem. For someone trained in science, with a vested interest, the idea of a post-science is of course extraordinary. For someone who deals with the newer methods of cybernetics, AI research programs, and Sixth Sense tech, it is mundane. As time progresses, more people will become familiar with these new technologies, and the notion that traditional science is outdated will become commonplace. As it has already become commonplace among the most relevant parties."

jake williams
10th March 2010, 19:07
What the fuck? They both sound insane. The NSer sounds like a new-agey loony, and the physicist comes off defensive and whiny.

Clearly, using AI for analysis and synthesis is valuable. But as the physicist points out, that relies on original research to be viable. We just haven't done enough research (there probably is no such amount, at least not one comprehensible right now) for synthesis to be all that's left to do. We still need new data, and even new theory, which for the most part we still haven't seen AI being independently capable of producing. It's possible that in the future AI will also be able to theorize, but our notion of "science" implies human participation (because of our very notions of knowledge), in such a way that all we'll have happening is humans with an expanded analytical capacity, augmented by technology. You might want to call it a cybernetic revolution because of how rapidly it happens and how much it changes the dimensionality of human intellectual capacity, but within the boundaries of our definitions of science and knowledge, that's still what's going on.

At any rate, there's a comment near the end that science is, in essence, unjustly self-justifying. Chomsky says good things about that (and a lot of other bullshit) here: http://www.chomsky.info/articles/1995----02.htm. The article is from 1995, well before the widespread use of the internet, but it's still basically applicable today. I've angrily shaken my fist with this in hand on more than one occasion. Some of it is more directly applicable to the debate here than the rest; some related comments are more directly applicable still, but I don't have them at hand.


First, to take part in a discussion, one must understand the ground rules. In this case, I don't. In particular, I don't know the answers to such elementary questions as these: Are conclusions to be consistent with premises (maybe even follow from them)? Do facts matter? Or can we string together thoughts as we like, calling it an "argument," and make facts up as we please, taking one story to be as good as another? There are certain familiar ground rules: those of rational inquiry. They are by no means entirely clear, and there have been interesting efforts to criticize and clarify them; but we have enough of a grasp to proceed over a broad range. What seems to be under discussion here is whether we should abide by these ground rules at all (trying to improve them as we proceed). If the answer is that we are to abide by them, then the discussion is over: we've implicitly accepted the legitimacy of rational inquiry. If they are to be abandoned, then we cannot proceed until we learn what replaces the commitment to consistency, responsibility to fact, and other outdated notions. Short of some instruction on this matter, we are reduced to primal screams. I see no hint in the papers here of any new procedures or ideas to replace the old, and therefore remain perplexed.

...

With regard to the first problem, I'm afraid I see only one way to proceed: by assuming the legitimacy of rational inquiry. Suppose that such properties as consistency and responsibility to fact are old-fashioned misconceptions, to be replaced by something different--something to be grasped, perhaps, by intuition that I seem to lack. Then I can only confess my inadequacies, and inform the reader in advance of the irrelevance of what follows. I recognize that by accepting the legitimacy of rational inquiry and its canons, I am begging the question; the discussion is over before it starts. That is unfair, no doubt, but the alternative escapes me. [emphasis mine]

MarxSchmarx
12th March 2010, 06:11
It's possible that in the future AI will also be able to theorize, but our notion of "science" implies human participation (because of our very notions of knowledge) in such a way that all we'll have happening is humans with an expanded analytical capacity, augmented by technology.

The fundamental reason I don't see an AI getting even this far is that, in the end, computer programs proceed according to binary processes inherent in their construction. As such, they are a "formal system" and must run into undecidability problems sooner or later. I cannot see how a computer program, no matter how sophisticated, can get around that.
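The undecidability worry here is essentially the halting problem. A minimal sketch of the classic diagonal argument, assuming any candidate halting decider written as an ordinary function (all names below are illustrative, not any real library):

```python
# Sketch of the halting-problem diagonal argument: any total function that
# claims to decide "does f() halt?" can be defeated by construction.

def make_adversary(halts):
    """Build a program that does the opposite of what `halts` predicts."""
    def adversary():
        if halts(adversary):
            while True:   # the decider said "halts", so loop forever
                pass
        # the decider said "loops", so halt immediately
    return adversary

def candidate_halts(f):
    # One (necessarily wrong) attempt at a halting decider: always guess "halts".
    return True

adv = make_adversary(candidate_halts)
# candidate_halts predicts that adv halts, but by construction adv then
# loops forever: the decider is wrong on its own adversary. The same
# construction defeats *every* total decider, which is Turing's result.
print(candidate_halts(adv))
```

The same trap closes on any other candidate: a decider that always answers "loops" yields an adversary that halts immediately, again contradicting its prediction.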

Dermezel
13th March 2010, 10:29
The fundamental reason I don't see an AI getting even this far is that, in the end, computer programs proceed according to binary processes inherent in their construction. As such, they are a "formal system" and must run into undecidability problems sooner or later. I cannot see how a computer program, no matter how sophisticated, can get around that.

http://en.wikipedia.org/wiki/Analog_computer

Non-Binary Computer Breakthrough (http://www.tomshardware.com/forum/59034-28-binary-computing-breakthrough)

Dermezel
13th March 2010, 10:30
BTW, this is speculative fiction. I have no idea why it keeps getting moved to non-fiction.

ckaihatsu
13th March 2010, 17:06
http://en.wikipedia.org/wiki/Analog_computer

Non-Binary Computer Breakthrough (http://www.tomshardware.com/forum/59034-28-binary-computing-breakthrough)


These alternative engineering designs for computation are irrelevant to the issue of how a final, "cut-off" determination is made about a complex problem. This could be considered the 'Holy Grail' of AI, since it would demonstrate an actual "free will" kind of decision, independent of any pre-programming.





[An AI is] a "formal system" and must run into undecidability problems sooner or later. I cannot see how a computer program, no matter how sophisticated, can get around that.


For example, what if we presented an AI with the common, seemingly mundane problem of whether two people should stay together in a relationship or break up. Many *people* would immediately know what questions [and follow-up questions] to ask and how to learn about the situation in order to render an informed opinion. But for a machine dependent on its programming it would either have to be pre-programmed with certain pre-defined variables and algorithms -- thus implicitly just extending human judgments -- or else it would be *incapable* of independently reaching an appropriate conclusion.

Dermezel
14th March 2010, 04:53
For example, what if we presented an AI with the common, seemingly mundane problem of whether two people should stay together in a relationship or break up. Many *people* would immediately know what questions [and follow-up questions] to ask and how to learn about the situation in order to render an informed opinion. But for a machine dependent on its programming it would either have to be pre-programmed with certain pre-defined variables and algorithms -- thus implicitly just extending human judgments -- or else it would be *incapable* of independently reaching an appropriate conclusion.

A lot of this depends on how you define free will. If you define it in the Cartesian sense, then yes, it may be impossible ever to accept the idea that an AI has free will.

If you define it in the Marxist-Dialectical Materialist sense, as conscious self-determination within a deterministic world, then an AI can have free will.

To answer your question, all you would have to do is grant the AI consciousness (self-awareness and the ability to reflect on cause and effect) and some basis for empirical knowledge and logical/heuristic reasoning.

It can then see the situation in the context of its background knowledge, and know that asking questions leads to more knowledge.

ckaihatsu
14th March 2010, 16:50
A lot of this depends on how you define free will. If you define it in the Cartesian sense, then yes, it may be impossible ever to accept the idea that an AI has free will.

If you define it in the Marxist-Dialectical Materialist sense, as conscious self-determination within a deterministic world, then an AI can have free will.

To answer your question, all you would have to do is grant the AI consciousness (self-awareness and the ability to reflect on cause and effect) and some basis for empirical knowledge and logical/heuristic reasoning.

It can then see the situation in the context of its background knowledge, and know that asking questions leads to more knowledge.


It's very *easy* for us to *say* things like "grant the AI consciousness" and "[let the AI] see the situation in context of background knowledge", but with both goals we're still running into the 'undecidability problem' -- namely how would an AI know *what the boundaries are* for defining a domain relative to the question at hand?

An AI might have *terrific* learning abilities and an expansive database of heuristic-structured knowledge available, but that *doesn't* automatically confer an ability to *decide* appropriately *when* a "situation" is present, *what* that situation is exactly, *how much* time to devote to it, *what its interest* is in it, *how many* questions to ask, etc. -- the *least* bit of human programming applied to these questions would be the extension of *human* value judgments into the AI, and would *negate* any independent "consciousness" or "free will" that it might otherwise be able to claim.

I've borrowed from Marx to produce an all-encompassing model of material surpluses and deficits:


communist economy diagram

http://i48.tinypic.com/2iiitma.jpg


I've toyed with the idea that this kind of universal framework might be of some practical value to the structuring of an AI's overall heuristics -- certainly it's meant to assist *our own* "internal intelligent heuristics"...!

jake williams
15th March 2010, 19:18
I might say that the fact that the piece is fictional makes it even more ridiculous, although it's somewhat comforting to think it's not an actual practicing physicist making those claims. He's being quite the mystic about science when he claims there's some sort of magical qualitative distinction between the understanding one gains from the process of obtaining a university degree and that which could hypothetically be garnered from even intensive study on the internet. The distinction doesn't exist. It might be a matter of fact that most people who learn science from the internet have less understanding than people who study in laboratories, but that is in no way necessary.

Dermezel
22nd March 2010, 05:18
It's very *easy* for us to *say* things like "grant the AI consciousness" and "[let the AI] see the situation in context of background knowledge", but with both goals we're still running into the 'undecidability problem' -- namely how would an AI know *what the boundaries are* for defining a domain relative to the question at hand?

Would you agree that the Darwinian mechanism of evolution is completely naturalistic? If so, why can that produce intelligence, but not technological advancement, which is many times more efficient?
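The naturalistic point can be illustrated with a toy genetic algorithm: a purely mechanical select-and-mutate loop, with nothing about the answer pre-programmed into the search rules themselves. (The function names and parameters below are illustrative assumptions, not a real research system.)

```python
import random

# Toy genetic algorithm: blind variation plus selection discovers a target
# bit-string; the loop's rules never encode the solution directly.
random.seed(0)

TARGET = [1] * 20                          # the "environment" being adapted to

def fitness(genome):
    """How many bits match the environment's demands."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a completely random population.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # The fittest survive and reproduce with variation; the rest are culled.
    survivors = population[:10]
    population = [mutate(random.choice(survivors)) for _ in range(30)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Nothing in `mutate` or the selection step "knows" the target; adaptation emerges from the mechanism alone, which is the sense in which the process is naturalistic.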

Dermezel
22nd March 2010, 05:18
I might say that the fact that the piece is fictional makes it even more ridiculous, although it's somewhat comforting to think it's not an actual practicing physicist making those claims. He's being quite the mystic about science when he claims there's some sort of magical qualitative distinction between the understanding one gains from the process of obtaining a university degree and that which could hypothetically be garnered from even intensive study on the internet. The distinction doesn't exist. It might be a matter of fact that most people who learn science from the internet have less understanding than people who study in laboratories, but that is in no way necessary.

Well, the idea that we can never get beyond science with sufficient technology is itself somewhat mystical. The fact that we have advanced more technologically in the past 200 years than in the previous 2000 emphasizes this point.

MarxSchmarx
22nd March 2010, 12:47
Originally Posted by ckaihatsu
It's very *easy* for us to *say* things like "grant the AI consciousness" and "[let the AI] see the situation in context of background knowledge", but with both goals we're still running into the 'undecidability problem' -- namely how would an AI know *what the boundaries are* for defining a domain relative to the question at hand?
Would you agree that the Darwinian process of evolution is completely naturalistic? If so, why could it produce intelligence, while technological advancement, which is many times more efficient, could not?

That's a fair analogy, and the issue most likely lies with using the computational analogy for the human brain, or for nature more generally. Analog computers would still have to make decisions about the truth-value of certain statements, and you can't get around undecidability in anything that is constructed/programmed with a set of rules necessary for its operation. On a more practical level, I also suspect that there are physical limits to how much can be calculated and modeled using the sort of computer technology we have today. But perhaps all that quantum computing stuff, if realized, would make this possible.
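The undecidability being invoked here is, at bottom, Turing's halting problem, and the classic diagonal argument can be sketched in a few lines of Python. Note that the `halts` oracle below is hypothetical -- the whole point of the sketch is that no correct implementation of it can exist:

```python
# Suppose, for contradiction, that a perfect halting decider existed.
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) halts."""
    raise NotImplementedError("no correct implementation can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about
    the program applied to its own source."""
    if halts(program, program):
        while True:
            pass          # loop forever if the oracle says "halts"
    return "halted"       # halt if the oracle says "loops"

# Feeding diagonal to itself forces a contradiction:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop;
# if it were False, diagonal(diagonal) would halt.
# Either way the oracle is wrong, so no such oracle can exist.
```

Any rule-governed system rich enough to ask this kind of question about itself runs into the same wall, which is the point being made about programmed decision procedures.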

It's something of a puzzle, to be sure, why we don't seem to see this same undecidability problem in the human brain, given that it is a physical phenomenon. The most likely explanation is that mathematics and computation are, in the end, models of nature, and the rules of formal logic that dictate their operation should not be confused with natural processes themselves. It's the old confusing-the-finger-with-the-moon thing. We have become very adept at modeling physics using computation. However, we still can't do very well in areas like predicting the weather, and in areas like biology, and certainly psychology, the success has been patchier still.

ckaihatsu
22nd March 2010, 17:56
Would you agree that the Darwinian process of evolution is completely naturalistic? If so, why could it produce intelligence, while technological advancement, which is many times more efficient, could not?


We're being too reductionistic if we're going to limit ourselves to discussing only the *nuts and bolts* of an intelligence.

I think we can't avoid discussing *identity* and *self-motivation*, which are individual traits that have to be supported by society as a whole, and have to be built up by the individual using an acquired sense of self-empowerment.

An AI box, on the other hand, might never be "birthed" and "nurtured" by society in quite the same way -- there *has* to be a widespread social atmosphere of *supporting* a new form of life if it is to even have a chance at an "independent" identity.

On the simpler side of things I think that an AI could be built that could suitably "fake" a fairly sophisticated personality, ELIZA-style, given a suitable set of heuristics that is based on materialism, as I alluded to previously.
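For what it's worth, the ELIZA-style "faking" described above really is just pattern matching with canned reflections. A toy sketch in Python follows; the patterns and responses are invented for illustration, not taken from any actual ELIZA script:

```python
import re

# Toy ELIZA: match a pattern, reflect part of the input back.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I),   "Is that the real reason?"),
]

def respond(utterance):
    """Return the first matching canned reflection, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I need a fully autonomous AI"))
# -> Why do you need a fully autonomous AI?
```

There's no understanding anywhere in that loop, which is exactly why a "fairly sophisticated personality" built this way would be a fake rather than a consciousness.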

But perhaps because of the limitations of the current corporate / neoliberal material base of such research projects there can't be a *true*, fully autonomous AI built since the financial backers couldn't run the risk of it becoming *too* "freethinking"...! (It might wind up choosing to be a socialist!)

= )

Dermezel
22nd March 2010, 18:01
That's a fair analogy, and the issue most likely lies with using the computational analogy for the human brain, or for nature more generally. Analog computers would still have to make decisions about the truth-value of certain statements, and you can't get around undecidability in anything that is constructed/programmed with a set of rules necessary for its operation. On a more practical level, I also suspect that there are physical limits to how much can be calculated and modeled using the sort of computer technology we have today. But perhaps all that quantum computing stuff, if realized, would make this possible.

Well, our genes, brain structure, conditioning, and epigenetic factors are basically just biological programming.


It's something of a puzzle, to be sure, why we don't seem to see this same undecidability problem in the human brain, given that it is a physical phenomenon. The most likely explanation is that mathematics and computation are, in the end, models of nature, and the rules of formal logic that dictate their operation should not be confused with natural processes themselves. It's the old confusing-the-finger-with-the-moon thing. We have become very adept at modeling physics using computation. However, we still can't do very well in areas like predicting the weather, and in areas like biology, and certainly psychology, the success has been patchier still.

A lot of that is due to inputting a logical positivist worldview as opposed to a Dialectical Materialist one. Logical Positivism cannot allow for contradictory information, so heuristic principles or inductive statements like "The sun will probably rise tomorrow because it has done so every time in the past" or "If I want to accomplish goal X, I need to pursue strategy and action Y" are not justifiable.
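As an aside, the "sun will probably rise tomorrow" inference is exactly what Laplace's rule of succession formalizes: after s successes in n trials, estimate the probability of another success as (s+1)/(n+2). A minimal Python sketch (the 10,000-day observation count is just a made-up figure for the example):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's estimate of the probability that the next trial succeeds."""
    return Fraction(successes + 1, trials + 2)

# The sun has risen on every one of, say, 10,000 observed days:
p = rule_of_succession(10_000, 10_000)
print(float(p))  # just under 1: very probable, but never certain
```

The estimate approaches certainty as evidence accumulates but never reaches it, which is precisely the kind of graded, probabilistic justification that strict positivist deduction struggles to license.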

Adding Dialectical Materialist thought will allow the AI to realize it has a real material consciousness that can contradict other deterministic-material factors with its own actions and decisions.

Dermezel
22nd March 2010, 18:03
We're being too reductionistic if we're going to limit ourselves to discussing only the *nuts and bolts* of an intelligence.

I think we can't avoid discussing *identity* and *self-motivation*, which are individual traits that have to be supported by society as a whole, and have to be built up by the individual using an acquired sense of self-empowerment.

An AI box, on the other hand, might never be "birthed" and "nurtured" by society in quite the same way -- there *has* to be a widespread social atmosphere of *supporting* a new form of life if it is to even have a chance at an "independent" identity.

On the simpler side of things I think that an AI could be built that could suitably "fake" a fairly sophisticated personality, ELIZA-style, given a suitable set of heuristics that is based on materialism, as I alluded to previously.

But perhaps because of the limitations of the current corporate / neoliberal material base of such research projects there can't be a *true*, fully autonomous AI built since the financial backers couldn't run the risk of it becoming *too* "freethinking"...! (It might wind up choosing to be a socialist!)

= )

We can birth and nurture it by means of the internet, or by giving it sensors and limbs through which we can interact with it. Or we can even teach it the causal necessity, mechanisms, and dialectical process of birthing and nurturing, and it can then birth and nurture itself by replicating these mechanisms in an internal environment, i.e. split up its own personality into an internal society that rapidly evolves and may even have revolutions.
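The "internal society" idea can at least be caricatured computationally as a population of sub-agents replicating under selection and mutation. A toy Python sketch, where the fitness target and mutation scheme are invented purely for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# Toy "internal society": each sub-agent is just a number; fitness
# rewards being close to a target, and selection plus mutation
# drives the population toward it over generations.
TARGET = 42

def fitness(agent):
    return -abs(agent - TARGET)

def evolve(population, generations=50):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        # Survivors are kept and also replicate with small mutations.
        children = [a + random.randint(-3, 3) for a in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve([random.randint(0, 100) for _ in range(20)])
print(best)  # typically lands at or near 42
```

This is obviously nothing like a personality splitting into a society, but it shows the bare mechanism: internal variation, selection, and inheritance are enough to produce directed change without any external designer steering each step.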

ckaihatsu
23rd March 2010, 18:45
split up its own personality into an internal society that rapidly evolves and may even have revolutions.


This is the *biggest* "if" factor -- we, as organic biological organisms, have all of our advanced cognitive and social abilities because of the *complexity* of our brains. We can *easily* hold contradictory concepts in our minds at the same time without confusion or crashing because we have an internal "thought space" that's insulated / independent from our behavior and external reality.

But we should be aware of *how* we are able to *develop* that internal thought space. That's where the larger society and nurturing / education come into play. Also, reflecting society's complexity, not everyone's internal foundation of precepts is the same, so what is imaginary / conceptual for some will be *absolute truth* / operational for others.

The question here is whether we'd be able to reproduce *inorganically* all of the complexity and "levels" of complex functioning that evolution has produced in *organic* beings. The brain is a massively *parallel* network that's structured to give rise to a "top-level" entity consciousness so as to "steer" the whole being through the external world.

Several approaches have been used, separately and in combination, in attempts to replicate this biological functioning -- I used to be more into this field when it gained prominence in the early '90s, but my interest (and the field) has tapered off since then...