CommunityBeliever
10th October 2011, 09:31
Dear revleft OI members, to understand where we are today we need to understand history. Artificial intelligence (AI) has a terrible history that goes back to the Lighthill Report, which baselessly disparaged AI and encouraged venture capitalists and state bureaucracies to redirect their funding to "more pragmatic purposes."
Although this was a major setback, computing was still in its infancy, so AI was able to recover and reach untold heights with the Lisp machines. Eventually, however, the Lisp machine market collapsed and we plunged into a devastating AI winter. Funding for AI research was all but halted.
Around the same time came the "PC revolution", which firmly established the anti-AI foundations of computing: architectures constrained by the Von Neumann bottleneck (http://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_Neumann_bottleneck), bloated software systems like UNIX, painful-to-use WIMP user interfaces, and even flawed keyboard layouts like QWERTY. Despite new developments like memristors, we haven't recovered from this, and I still await an AI summer.
A detailed analysis of the state of society clearly shows that these immanent deficiencies extend far outside of computing, to such areas as our city designs and our energy networks. I never thought of classifying this as a technological dark age until I read the writings of other Lispers (http://www.loper-os.org/?p=21) who classify it as such. While capitalism was a useful social system for the transition away from feudalism and for industrialisation, it has long since outlived its usefulness. It is time that we had a social revolution so that we can set technology back on track, and Lisp is one tool we can apply to that effort, as I will explain in this article.
Mathematical foundations
To start off: in cognitive science we study the nature of intelligent agents. Intelligent agents are based upon the perception-action cycle (PAC). They receive a stream of inputs through their sensory receptors and they act upon the environment using their effectors.
http://img231.imageshack.us/img231/2931/agenti.jpg
Recall that we receive our sensory perceptions in a stream, one after another. This leads us to perceive a twoity: a precedent event which leads to some consequent event. When divested of all quality, this twoity is the basic substratum behind all of mathematics:
precedent → consequent
The Lisp programming language is based upon this intuitive concept. We refer to this twoity as the cons cell (http://en.wikipedia.org/wiki/Cons):
(precedent . consequent)
The parenthesis notation Lisp is well known for is just a means of expressing arbitrarily long sequences:
0 → 1 → 2 → 3 → 4 → 5
(0 1 2 3 4 5)
Always specifying one exact thing that leads to another exact thing can be cumbersome. We also want to express the idea of one sort of thing leading to another sort of thing. We refer to these as functions. The precedent of a function is known as its input, and the consequent is known as its output:
function :: input → output
A function is sometimes referred to as a lambda, and it is studied in the lambda calculus (http://en.wikipedia.org/wiki/Lambda_calculus), which is a very important part of Lisp. A Lisp lambda may look like:
(λ (n)
(* n n))
Using the concepts of sequences and functions as foundations, John McCarthy introduced the ten axioms of Lisp in 1960: atom, quote, eq, car, cdr, cons, cond, lambda, label, apply.
Moving beyond these basic foundations, we are eventually going to want to create sequences for which we only know the first few elements and a rule for creating the next ones (these sequences are sometimes referred to as lazy). One lazy sequence we are all familiar with is the counting numbers:
(0 1 2 3 4 5 6 7 8 9 10 ...)
The foundation of the counting numbers is the number 0, and the rest can be created from it using the increment function. This leads to a simple iterative sequence:
(iterate inc 0)
Once we are able to express the infinite sequence of counting numbers, we can move on to other sequences, like the infinite sequence of factorial numbers:
(def factorial
(reductions * (cons 1 (rest (range)))))
This function will evaluate to the following sequence:
(1 1 2 6 24 120 ...)
Then, when you want to, you can acquire new values in the sequence, like 720 and 5040, on demand.
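To make the laziness concrete, here is a small sketch in plain Clojure (core library only; the definition is repeated so the snippet stands alone) of pulling values from the sequence on demand:

```clojure
;; The lazy factorial sequence: nothing is computed until demanded.
(def factorial
  (reductions * (cons 1 (rest (range)))))

(take 8 factorial) ;=> (1 1 2 6 24 120 720 5040)
(nth factorial 7)  ;=> 5040
```

Only the demanded prefix is ever realised; asking for a later element simply forces the additional values.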
The counting numbers work perfectly when we are working with discrete spaces, but once you begin to deal with any sort of continuum, what you are going to want is the rational numbers, which can simply be expressed by consing a sign and a denominator onto a numerator:
(sign numerator denominator)
But even the rational numbers are limited, as we will eventually want to handle the computable numbers, like pi, e, and the golden ratio. These can all be expressed in terms of Cauchy sequences (http://en.wikipedia.org/wiki/Cauchy_sequence). For example, the sequence for e may be expressed as:
(def e
(infinite-series
(fn [n]
(/ (nth factorial n)))))
That lazy sequence evaluates to:
(0 1 2 5/2 8/3 65/24 163/60 1957/720 685/252 109601/40320 ...)
Notice that the last value shown in that sequence is 109601/40320, or about 2.7182, which approaches the value of e, so this sequence is an effective representation of the number. The other common computable numbers can be expressed similarly:
(def pi
(infinite-series
(fn [n]
(/ (* 4 (expt -1 n))
(+ (* 2 n) 1)))))
(def golden-ratio
(map
(fn [i]
(/ (nth fib (inc i))
(nth fib i)))
(rest (range))))
Note that this implementation of the golden ratio first requires a definition of the Fibonacci sequence (http://rosettacode.org/wiki/Fibonacci_sequence#Clojure), referred to above as fib.
Now we have seen the capabilities of sequences; however, there are other types of collections used in mathematics. The next type we will introduce is the vector, which is a function of the counting numbers; as such, vectors can be called like any other function:
(["a" "b" "c"] 1) ;=> "b"
The collection we know of as a set is represented as a predicate function, which can be defined extensionally as an orderless sequence of unique elements. This is vastly different from Zermelo-Fraenkel set theory with the axiom of choice, which is generally used as a foundation of mathematics.
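Sets acting as predicate functions can be sketched directly (plain Clojure, core library only):

```clojure
;; A set is a membership predicate: it returns the element or nil.
(#{"a" "b" "c"} "b")           ;=> "b"
(#{"a" "b" "c"} "z")           ;=> nil

;; Because sets are predicates, they compose with higher-order functions:
(filter #{2 3 5 7} (range 10)) ;=> (2 3 5 7)
```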
Computational foundations
Lisp is more than just a mathematical notation; it is also a means of building computing systems. There are several elements to achieving this:
Persistent data structures:
In computing, a persistent data structure is a data structure that preserves all previous versions of itself. The most important persistent data structure is the singly linked list:
http://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Cons-cells.svg/200px-Cons-cells.svg.png
The singly linked list is a physical implementation of the Lisp cons cell. Each cons cell holds some first value and points to the previously created values in the sequence. Singly linked lists can be used to express most structures; for example, association lists (http://en.wikipedia.org/wiki/Association_list) can be used to represent associative arrays.
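A minimal association-list sketch in Clojure may make this concrete; the names phone-book and alist-get are illustrative, not from any standard library:

```clojure
;; An association list: a linked list of key/value pairs.
(def phone-book
  (list ["alice" 1234] ["bob" 5678]))

;; Linear lookup over the pairs, as classic assoc-list access works.
(defn alist-get [alist k]
  (some (fn [[key val]] (when (= key k) val)) alist))

(alist-get phone-book "bob") ;=> 5678
```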
http://upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Binary_tree.svg/200px-Binary_tree.svg.png
Furthermore, tree data structures, which point to an arbitrary number of previous nodes, can be implemented persistently.
Single address space:
The fundamental point that determines our use of address spaces is the size of our processor's word. For example, 32-bit processors can address only 4 GiB of byte-addressable memory, which is clearly too small for most modern tasks; 64-bit processors, on the other hand, can address 18.45 exabytes (2^64 bytes), which should be more than sufficient for modern computers.
Given a single address space, every object in memory is given an immutable address, and pointers become perfect object references. This means that the system's linked data structures (http://en.wikipedia.org/wiki/Linked_data_structure), including its persistent data structures, will not lose their meaning.
On the other hand, having multiple address spaces poses a huge problem: when we have a linked data structure, if we want to store it to disk or communicate it to another process, it may completely lose its meaning. A variety of solutions are used to approach this problem, like file systems, inter-process communication, flattening, pointer swizzling, etc.
Homoiconicity:
In Lisp code is treated like any other type of data. This property is sometimes referred to as homoiconicity. Code is represented as a data structure like any other item in the system:
http://www.redhat.com/magazine/002dec04/features/gcc/figs/ast.png
As such, the Lisp machines are effectively one database that maps immutable addresses to code / data segments, which are versioned using persistent data structures and transparently compiled when necessary (see "why do we need modules" (http://erlang.org/pipermail/erlang-questions/2011-May/058769.html)). This includes reflectivity / introspection and basic automatic code generation.
Abstractions:
One of the principles of Lisp is that there is no strict divide between "low level programming" and "high level programming". Instead it is "Lisp all the way down": Lisp is used all the way down to the machine hardware itself, and abstractions are progressively introduced using functions and macros.
(defmacro square
[n]
`(* ~n ~n))
When used, that macro generates a separate piece of code:
(square 5) ;=> (* 5 5)
This principle is exemplified by self-hosting compilers, the first of which was written for Lisp by Hart and Levin in 1962. In a self-hosting compiler you implement your language in itself. For example, the reductions function, which I used to define factorial, is defined within Lisp itself:
(defn reductions
"Returns a lazy seq of the intermediate values of the reduction (as
per reduce) of coll by f, starting with init."
{:added "1.2"}
([f coll]
(lazy-seq
(if-let [s (seq coll)]
(reductions f (first s) (rest s))
(list (f)))))
([f init coll]
(cons init
(lazy-seq
(when-let [s (seq coll)]
(reductions f (f init (first s)) (rest s)))))))
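To see what reductions does in practice, here are a couple of small usage sketches (plain Clojure, core library only):

```clojure
;; Running sums of the counting numbers:
(take 6 (reductions + (range)))  ;=> (0 1 3 6 10 15)

;; The intermediate states of building up a vector:
(reductions conj [] [:a :b :c])  ;=> ([] [:a] [:a :b] [:a :b :c])
```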
Automation
Recall that intelligent agents consist of two components: receptors and effectors. One of the fundamental goals of AI is to automate the activities of our effectors, and naturally Lisp offers many methods to achieve this goal.
Memory management:
Manual memory management is very time consuming. It is error prone, leading to dangling pointer bugs, double free bugs, memory leaks, memory smashes, etc. It makes you deal with the details of the memory architecture, which may hinder portability. And all the code that you add to deal with memory management increases paging, which generally cancels out whatever performance advantages you may gain relative to modern generational copying collectors.
Furthermore, our use of persistent data structures makes it rapidly infeasible to determine manually which parts of a structure are shared by how many previous versions, and it often becomes desirable to discard old versions. This necessitates an environment with garbage collection, which explains why garbage collection is a core feature of functional programming languages.
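The structural sharing in question can be observed directly; this sketch assumes only core Clojure:

```clojure
;; Two "versions" of a list: prepending allocates one new cons cell,
;; while the old version remains valid and untouched.
(def v1 (list 2 3 4))
(def v2 (cons 1 v1))

v1                        ;=> (2 3 4)
v2                        ;=> (1 2 3 4)
;; The tail of v2 *is* v1 - shared, not copied:
(identical? v1 (rest v2)) ;=> true
```

Because neither version can be mutated, the sharing is invisible to the program; but something must eventually reclaim versions that no one references, hence garbage collection.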
In the future, our garbage collectors can be extended and configured to deal with particular tasks, and they can be made to use AI search algorithms to find the best means of memory management.
Persistence:
A natural consequence of having a single address space is that all data can automatically be transferred from RAM to persistent storage. The effect is that the process of "saving" and "loading" that has become so familiar in UNIX-based operating systems is completely automated by the system. If there is a blackout you won't lose your work.
Compilation:
In conventional operating systems you have to deal with the compile/debug/pray cycle. In the Lisp machines all programs should come in the form of readable source code and their execution, including interpretation and compilation, will be automatically handled by the machine.
Performance
There is a misconception that Lisp is "slow", which probably arises from the pitiful speed of executing AI algorithms on non-AI hardware (e.g. contemporary Von Neumann machines), not from any deficiency in Lisp itself, which is even capable of directly representing machine instructions.
Manual performance
This arises from explicitly laying out an instruction sequence for some target platform (e.g. a physical ISA or a virtual machine like the JVM, Parrot, or the CLR) to execute. The performance of this sort of code is platform-dependent (in other words, it isn't really portable). For example, although assembly language is incredibly fast on the ISA it is written for, it loses this efficiency when you have to use hardware virtualisation to run it. Lisp is perfectly capable of this same sort of performance through embedded assembly, for example with the Lisp assembly program (LAP):
(TWNEI NARGS 4)
(MFLR LOC-PC)
(BLA .SPSAVECONTEXTVSP)
(VPUSH ARG_Z)
(LWZ NARGS 331 RNIL)
(TWGTI NARGS 0)
(LI ARG_Y '1)
(LWZ ARG_Z 0 VSP)
(BLA .SPRESTORECONTEXT)
(MTLR LOC-PC)
(BA .SPBUILTIN-PLUS)
In a similar manner to assembly, C is efficient when you explicitly specify an algorithm for its host platform, "UNIX", but when you want to move out of that platform, or when you want to create any sort of automation, the language falls short (previously explained here (http://www.revleft.com/vb/showpost.php?p=2252233&postcount=11)). Without automatic memory management, the basic substratum of all advanced automation, C remains limited to manual performance and to "low level programming". This shouldn't be hailed as an advantage, as it is today by some developers in our technological dark age.
Automatic performance
This arises from a description of what you want to do, rather than how to do it, namely a declarative (e.g. functional) specification. The system will use AI search algorithms (http://en.wikipedia.org/wiki/Search_algorithm) to find the most efficient solution; for example, the system may use a genetic algorithm and a fitness function to optimise its solutions. One of the features of AI operating systems / machine architectures like the Lisp machines is that they will vastly improve our means of automation.
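As a small illustration of the declarative style (plain Clojure): we state what we want, and leave the how to the system:

```clojure
;; "The sum of the even numbers below 100" - no loop counters,
;; no mutation, just a composition of what-descriptions.
(reduce + (filter even? (range 100))) ;=> 2450
```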
Responses
You're really hung up on LISP, aren't you?
I am "really hung up" on artificial intelligence (AI). The main AI programming language is still Lisp, followed by the AI logic programming language Prolog (http://en.wikipedia.org/wiki/Prolog), which is itself generally embedded in Lisp.
You can call any language "building material", but it's still a programming language.
Lisp is a separate foundation of computation, one that can be used as a building material for programs and as a means of creating entire machine architectures - the Lisp machines. In this sense, it certainly is distinct from most non-AI languages.
if (a == b)
c = !d;
else
c = d + b;
Do me a favor and let me first post a proper reply to your examples. Here is how this might be implemented in Lisp:
(set! c
(if (= a b)
(not d)
(+ d b)))
If you look at the structure of the code, eventually you will get so used to it that other sorts of code will look worse. Furthermore, here are some specific concerns with your code example:
It uses the visually similar operators = and ==, one for assignment and one for equality testing, which can be a source of confusion.
It uses the assignment operator = twice rather than once, which means it has excess side effects.
too complex
Lisp is based upon very simple foundations: the cons cell and the lambda calculus.
Lisp is a 1950s programming language...
We must consider something in terms of its technical merits relative to its age. Since Lisp is a 1950s language that is still the technical state of the art today, that should tell you something.
What's your proposed solution to kernels then?
Kernels violate the foundational principle of maintaining a single address space, because kernels are based upon a separate kernel address space.
This has been clocked... C outruns LISP 99% of the time...
How to make Lisp go faster than C (http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBoQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.142.1262%26rep%3Drep1%26type%3Dpdf&rct=j&q=Common%20Lisp%20as%20fast%20as%20C%20pdf&ei=iqySTsaIDYHjiALYi4XjBg&usg=AFQjCNHVW68xhEg1s9BQ3vSril2GqpSbAQ&sig2=drDWdx4IsKyzEl_UDQi6wA&cad=rja)
LISP is considered a sluggish language, even slower than Java.
Please read the above description of performance which refutes this ignorant conclusion.
Imperative, OOP is the most intuitive thing.
1) Imperative programming is intuitive, and it is an integral part of any useful program, but it should be integrated with functional programming.
2) Mutable OOP as defined by C++/Java/C# leads to bloat, as you end up developing separate classes to deal with everything, and its mutable nature makes concurrency / parallelism / distributed programming problematic (see Erlang for a good alternative, and Rich Hickey's OOP explanation below). Multimethods (http://clojure.org/multimethods) deal with the same tasks more effectively.
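A small multimethod sketch (plain Clojure; the names area, :shape, etc. are illustrative, not from any library) shows dispatch on a value in the data rather than on a class hierarchy:

```clojure
;; Dispatch on the :shape key of a plain map - no class per shape.
(defmulti area :shape)

(defmethod area :rect [{:keys [w h]}]
  (* w h))

(defmethod area :circle [{:keys [r]}]
  (* Math/PI r r))

(area {:shape :rect :w 3 :h 4}) ;=> 12
```

New shapes are added with new defmethods, without touching existing code or inventing a class for each case.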
Critique of non-AI languages
Various programming languages have been produced over time to deal with things besides AI: the scripting languages were designed to script applications, C was invented with the specific goal of UNIX systems programming, and Java and C# were invented to work with their respective virtual machines. I will critique each of these languages here.
UNIX systems languages
The C programming language was developed between 1969 and 1973 by Dennis Ritchie specifically to build UNIX. Nonetheless, the language is based upon flawed foundations, perhaps the most important of which is its lack of GC support, which prevents it from having idiomatic support for FP or for concurrency / parallelism / distributed programming.
The D programming language (http://en.wikipedia.org/wiki/D_%28programming_language%29) was created by Walter Bright as a successor to C; hence its name is the next letter of the alphabet. The language adds much needed features like automatic memory management, and it removes bad features like the C preprocessor. Despite being a thorough improvement over C, D has failed to approach C's popularity after over a decade.
This might make userland programming with C a bit more difficult, and make managed languages more appealing, but this is a major strength in unavoidable cases.
Manifest typing encourages good program structure and consistency of type usage, and allows programmers to have tight control over what types are used.
Most of us have seen films where a character turns off the auto-pilot to drive manually. Automation is incredibly advantageous: it saves work, and it can often produce better results than manual activity, though there are some exceptional cases where you want to do things manually. These exceptional cases are an argument for optional manual controls; they are not an argument against automation itself. After all, one of the fundamental reasons we constructed computers was to automate repetitive tasks.
The D programming language brings at least two new forms of automation to the table: automatic memory management, and automatic type inference. For example, the expression you described previously could be expressed in D:
auto x = 0; // int is inferred
As such, C is inferior relative to D. A good start at fixing this problem would be to restrict C to the kernel-space of multiple-address-space operating systems (MASOSs). This would leave us free to use a secure/portable GC language as the basis of user space (e.g. Java/C#), which would lead to huge advantages such as transparent distribution across the network. Unfortunately, the most popular user-space systems are based upon C/C++ today: GNOME is C based, KDE is C++ based, and Cocoa is Objective-C based.
Yeah, how stupid to have a very useful feature that can let you customize the way you code and actually make things less cryptic...
I recommend that you read the explanation as to why the C preprocessor is deficient on the D programming language site:
http://www.digitalmars.com/d/2.0/pretod.html
Back when C was invented, compiler technology was primitive. Installing a text macro preprocessor onto the front end was a straightforward and easy way to add many powerful features. The increasing size & complexity of programs have illustrated that these features come with many inherent problems. D doesn't have a preprocessor; but D provides a more scalable means to solve the same problems.
What I believe is that C will eventually go the way of the dinosaurs. That chart actually supports the idea, as it shows C popularity in a technical downtrend. If that was a stock chart, I'd consider shorting the stock or buying put options against it.
C has been going up in its relative popularity and it is going to become number one again. If you combine C, C++, and Objective-C into one, they trump all other programming languages hands down, accounting for approximately 32% of the modern software industry. Consider the incredible rise in Objective-C use:
http://www.tiobe.com/content/paperinfo/tpci/images/history_Objective-C.png
Would you be willing to bet on an incredible rise like that?
Duh, these are not language features... these are OS API features, all of which are available in C on any self-respecting OS...
A language should have idiomatic support for concurrency and distribution within its core. It was definitely specified that C would not deal with concurrency when it was created in 1969, which has led to the separate Concurrent C programming language (http://books.google.com/books/about/The_Concurrent_C_programming_language.html?hl=pt-BR&id=ZDRBv8U_bJ0C) and other attempts to address the language's flaws.
Because there are no strings... A string is just an abstraction of a character array. And manipulating a character array is quite simple.
Manipulating "character arrays" in C is far from "simple", which is precisely why programming languages like Perl (http://en.wikipedia.org/wiki/Perl) and Awk (http://en.wikipedia.org/wiki/AWK) became popular tools on UNIX in the first place. Perl has regular expressions and other string handling features, and it was marketed as a "text processing language." If C really had a good means of handling character arrays, then these alternative systems might never have arisen on UNIX.
Again, this has nothing to do with the language but the standard libraries. Yes, it is a bit cryptic. No, it's not that hard to learn what they do.
Early C implementations had a limited identifier length, which is the only reason we have such cryptic names as "isalnum". There is no reason this should have happened in the first place, let alone for us to continue to live with it today.
Erm... so does LISP (don't you hate my using all caps for the name? ;))... This is so trivial and silly my initial response can only be a double-facepalm...
The obfuscated code contest is made possible by all the poor design features in C, like the position-sensitive ++ and -- operators and the cryptic names like "isalnum" that arose from an era of limited identifier space:
char*O=" <60>!?\\\n"_ doubIe[010]_ int0,int1 _ Iong=0 _ inIine(int eIse){int
O1O=!O _ l=!O;for(;O1O<010;++O1O)l+=(O1O[doubIe]*pow(eIse,O1O));return l;}int
main(int booI,char*eIse[]){int I=1,x=-*O;if(eIse){for(;I<010+1;I++)I[doubIe-1]
=booI>I?atof(I[eIse]):!O switch(*O)x++)abs(inIine(x))>Iong&&(Iong=abs(inIine(x
)));int1=Iong;main(-*O>>1,0);}else{if(booI<*O>>1){int0=int1;int1=int0-2*Iong/0
[O]switch(5[O]))putchar(x-*O?(int0>=inIine(x)&&do(1,x)do(0,true)do(0,false)
case(2,1)do(1,true)do(0,false)6[O]case(-3,6)do(0,false)6[O]-3[O]:do(1,false)
case(5,4)x?booI?0:6[O]:7[O])+*O:8[O]),x++;main(++booI,0);}}}
This is, yet again, another useful capability and flexibility.
You didn't address the vital point in that quote, which is that if you forget a break you can introduce a silent error. This has led people to spend countless hours debugging their programs, which is why there is a general conception that you should avoid the C switch statement.
An = operator is for assignment. A == operator is for comparison. Learn the difference, and you'll be just fine.
It's easy to know the difference; that isn't what is at issue here. If you accidentally leave off one of the symbols, it will totally change the meaning of the expression. This can arise from a simple typing accident, and since the expressions look so similar, the error will be hard to discover.
I fail to see how stdio.h and stdlib.h are a problem.
Here are the specific points of contention:
stdio.h contradicts the single address space principle, since it provides MASOS-style IO.
stdlib.h contradicts the automatic memory management principle, since it contains malloc and free.
The best reason not to use GC is because you don't want to -- plain and simple. Plenty programmers are smart enough to manage their own memory allocations/de-allocations and work with addresses.
The fact that some people are "smart enough" to do things manually is an argument for optional manual controls, not an argument against automation itself. Considering this, automatic memory management remains an advantageous feature.
Manual memory management is not that difficult, at least to me.
There is a common expression that goes, "it is easier said than done". When you write really large programs that begin to use persistent data structures, and you subsequently spend many days/weeks/months debugging your code looking for memory management errors, then get back to me.
Let's take, for example, the issue of graphics programming. Graphics hardware is growing insanely fast and powerful, but we still aren't close to true photorealism. We have to squeeze every drop of performance out of our GPUs to push triangles and texels to the screen. The only way you're going to get that fast is through native drivers. Then you need to communicate with those drivers, so we have native technologies like DirectX and OpenGL. After that, you can use managed code like SlimDX, OpenTK or XNA. The managed code works fine and can write great games, but it wouldn't be so great without super fast hardware and native drivers.
Modern non-AI languages are not suitable for everything. This ends up creating a mess of contradictory languages (including scripting languages) and frameworks.
VM systems languages
The Java programming language was developed in 1995 by James Gosling at Sun Microsystems. Java resolves many of the earlier problems of C, like its lack of garbage collection, its lack of basic security features, and its senseless preprocessor; but in reality Java isn't much of an improvement, because the mutable OOP paradigm it introduced is inherently deficient. This is covered by Rich Hickey, who explains why Lisp (specifically Clojure) isn't OOP like Java and C#:
Q: Clojure is not object-oriented, why not?
A: Well it's not object-oriented the way Java, C# or C++ is. That's not really the way you would structure things. It is in some ways a rejection of my heritage, as an object-oriented programmer. I think after having done it for two decades, I don't believe in it anymore. I just don't think it's the right way to start. It can help you organize your code, but it brings along with it some complexity that I have found in real systems always ends up biting you—and it is related to mutability. By default, an object-oriented program is a graph of mutable objects. That's a very, very difficult thing to think about, debug and keep running. I've worked on very big object-oriented systems, and you always essentially run into the problems related to that architecture. I think that even before you get to concurrency, there are complexity problems with mutable objects that basically affect every large object-oriented application. When you add in concurrency, those problems become much clearer.
So a functional approach was something that I had already started doing, even in programs I was writing in C#. For instance, there were parts of the national exit poll system that were very functional, even though it's a C# system, because the way to defend yourself against this complexity is to write in a more functional style with a lot more immutability. The problem is that it's not very idiomatic in C# or Java to do so. I wanted to make a language where it was—where the default was to do the right thing. When you needed mutability, there would be a good story about how to do that compatibly with concurrency.
The C# programming language was created by Microsoft as a direct copy of Java, and since then they have made the language far superior to Java by adding improvements like first-class functions and type inference. Nonetheless, it still has flawed anti-concurrent foundations.
Scripting languages
The problem with the scripting languages is that they are too domain specific; that is to say, each is designed to script some particular sort of thing. For example, JavaScript is designed to script client-side applications, and PHP is used to script server-side applications. Both of these languages also have a large set of problems: JavaScript was rushed to market, and due to the practices of Microsoft and other companies, it has never recovered from that (see "your language sucks" (http://wiki.theory.org/YourLanguageSucks) for more details).
Energy
"Green energy", like I said, has become a "holy grail" sort of thing. Everyone wants to find the winning technology, and it is actively being sought with enthusiasm.
That is certainly what is propagated by the *capitalist media* - but the actual truth is not so simple. We have known for nearly a century that we should create a global smart-grid system for the wired/wireless transmission of electric energy, e.g. Nikola Tesla's Wardenclyffe Tower / Robert Metcalfe's smart grid.
Additionally, the capitalist plutocracy has mismanaged nuclear energy development. We have known for decades that we should utilise thorium fission, but that option hasn't been effectively pursued, due to the un-weaponisability of thorium relative to uranium and the possibility of techno-deprecation of possessed uranium capital. Furthermore, the plutocracy has a long history of mismanaging fusion technology / plasma tools, e.g. the assassination of Eugene Mallove shortly after purportedly sustaining a fusion reaction, the suppression of Farnsworth's fusor by the IRS, etc.
Ultimately we have a depleting type-0 energy supply (in terms of the Kardashev scale) consisting of uranium, coal, oil, and other non-renewable energy sources. As these run out, procurement methods will grow more dangerous, which will in turn result in more accidents like the Deepwater Horizon oil spill and the Fukushima accident, which will lead to social turbulence.
Well this was a major setback, computing was still in its infancy, so AI was able to recover and reach untold heights with the Lisp machines. However, eventually the Lisp machine market collapsed and we plunged into a devastating AI winter. All funding for AI research was halted.
Around the same time happened to be a "PC revolution" which firmly established anti-AI foundations of computing, including architectures constrained by the Von Neumann bottleneck (http://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_Neumann_bottleneck), bloated software systems like UNIX, painful to use WIMP user interfaces, and even flawed keyboard layouts like QWERTY. Despite new developments like memristors, we haven't recovered from this, and I still await an AI summer.
A detailed analysis of the state of society, clearly shows that immanent deficiencies extend far outside of computing to such areas as our city designs and our energy networks. I never thought of classifying this as a technological dark age, until I read the writings of other Lispers (http://www.loper-os.org/?p=21) who classify it as such. Well capitalism was a useful social system for the transition away from feudalism, and for industrialisation, it has long since outlived its usefulness. It is time that we had a social revolution so that we can set technology back on track, Lisp is one solution we can use to that effort as I will explain in this article.
Mathematical foundations
To start off: in cognitive science we study the nature of intelligent agents. Intelligent agents are based upon the perception-action cycle (PAC). They receive a stream of inputs through their sensory receptors and they act on the environment using their effectors.
http://img231.imageshack.us/img231/2931/agenti.jpg
Recall that we receive our sensory perceptions in a stream, one after another. This leads us to perceive a twoity: a precedent event which leads to some consequent event. When divested of all quality, this twoity is the basic substratum behind all of mathematics:
precedent → consequent
The Lisp programming language is based upon this intuitive concept. We refer to this twoity as a cons cell (http://en.wikipedia.org/wiki/Cons):
(precedent . consequent)
The parenthesis notation Lisp is well known for is just a means of expressing arbitrarily large sequences:
0 → 1 → 2 → 3 → 4 → 5
(0 1 2 3 4 5)
Always specifying one exact thing that leads to another exact thing can be cumbersome. We also want to express the idea of one sort of thing leading to another sort of thing. We refer to these as functions. The precedent of a function is known as its input and the consequent of a function is known as its output:
function :: input → output
A function is sometimes referred to as a lambda, and it is studied in the lambda calculus (http://en.wikipedia.org/wiki/Lambda_calculus), which is a very important part of Lisp. A Lisp lambda may look like:
(λ (n)
(* n n))
Using the concepts of sequences and functions as foundations, John McCarthy introduced the ten axioms of Lisp in 1960: atom, quote, eq, car, cdr, cons, cond, lambda, label, apply.
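In Clojure, the dialect used for the examples in this article, a lambda is applied directly and the list axioms survive under modern names: first, rest, and cons are the contemporary spellings of car, cdr, and cons:

```clojure
((fn [n] (* n n)) 5)  ;=> 25           ; applying a lambda to an argument
(first '(1 2 3))      ;=> 1            ; car: the precedent
(rest  '(1 2 3))      ;=> (2 3)        ; cdr: the consequent
(cons 0 '(1 2 3))     ;=> (0 1 2 3)    ; cons: build a new cell
```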
Moving beyond these basic foundations, we are eventually going to want to create sequences for which we only know the first few elements and a rule for creating the next ones (these sequences are sometimes referred to as lazy). One lazy sequence we are all familiar with is the counting numbers:
(0 1 2 3 4 5 6 7 8 9 10 ...)
The foundation of the counting numbers is the number 0, and the rest of them can be created using the increment function. This leads to a simple iterative sequence:
(iterate inc 0)
Once we are able to express the infinite sequence of counting numbers we can move on to other sequences, like the infinite sequence of factorial numbers:
(def factorial
(reductions * (cons 1 (rest (range)))))
This definition will evaluate to the following sequence:
(1 1 2 6 24 120 ...)
You can then acquire new values in the sequence, like 720 and 5040, on demand.
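Because these sequences are lazy, you only pay for the elements you ask for. Taking a prefix of the counting numbers, or indexing into the factorial sequence defined above, forces just enough of the sequence to be realised:

```clojure
(take 6 (iterate inc 0))  ;=> (0 1 2 3 4 5)
(nth factorial 6)         ;=> 720
(nth factorial 7)         ;=> 5040
```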
The counting numbers work perfectly when we are working with discrete spaces, but once you begin to deal with any sort of continuum, what you are going to want is the rational numbers, which can simply be expressed by consing a denominator and a sign onto a numerator:
(sign numerator denominator)
But even the rational numbers are limited, as we will eventually want to handle the computable numbers, like pi, e, and the golden ratio. These can all be expressed in terms of Cauchy sequences (http://en.wikipedia.org/wiki/Cauchy_sequence). For example, the sequence for e may be expressed as:
(def e
(infinite-series
(fn [n]
(/ (nth factorial n)))))
That lazy sequence evaluates to:
(0 1 2 5/2 8/3 65/24 163/60 1957/720 685/252 109601/40320 ...)
Notice that the last value shown, 109601/40320, is approximately 2.71828, which is approaching the value of e, so this sequence is an effective representation of the number. The other common computable sequences can be expressed similarly:
(def pi
(infinite-series
(fn [n]
(/ (* 4 (expt -1 n))
(+ (* 2 n) 1)))))
(def golden-ratio
(map
(fn [i]
(/ (nth fib (inc i))
(nth fib i)))
(rest (range))))
Note that this implementation of the golden ratio first requires a definition of the Fibonacci sequence (http://rosettacode.org/wiki/Fibonacci_sequence#Clojure).
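These examples lean on two names that are not spelled out in the snippets above: infinite-series and fib. A minimal sketch of both, assuming infinite-series is meant to produce the partial sums of (f 0), (f 1), (f 2), and so on:

```clojure
;; hypothetical helper assumed above: partial sums of the series,
;; starting from the empty sum 0
(defn infinite-series [f]
  (reductions + 0 (map f (range))))

;; lazy self-referential Fibonacci sequence, as on Rosetta Code
(def fib
  (lazy-cat [0 1] (map + fib (rest fib))))

(take 8 fib) ;=> (0 1 1 2 3 5 8 13)
```

Defined this way, the e sequence above produces exactly the partial sums shown: 0, 1, 2, 5/2, 8/3, and so on.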
Now that we have seen the capabilities of sequences, we can turn to the other types of collections used in mathematics. The next type of collection we will introduce is the vector, an indexed list which is a function on the counting numbers; as such, vectors can be called like any other function:
(["a" "b" "c"] 1) ;=> "b"
The collection we know of as a set is represented as a predicate function, which can be defined extensionally as an orderless and unique sequence. This is vastly different from Zermelo-Fraenkel set theory with the axiom of choice, which is generally used as a foundation of mathematics.
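Clojure makes the set-as-predicate idea literal: a set is a function that answers membership questions, so it can be passed anywhere a predicate is expected:

```clojure
(def vowels #{\a \e \i \o \u})

(vowels \e)             ;=> \e   ; member: returns the element
(vowels \z)             ;=> nil  ; non-member: returns nil (falsey)
(filter vowels "lisp")  ;=> (\i) ; the set used directly as a predicate
```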
Computational foundations
Lisp is more than just a mathematical notation; it is also a means of building computing systems. There are several elements to achieving this:
Persistent data structures:
In computing, a persistent data structure is a data structure that preserves all previous versions of itself. The most important persistent data structure is the singly linked list:
http://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Cons-cells.svg/200px-Cons-cells.svg.png
The singly linked list is a physical implementation of the Lisp cons cell. Each cons cell holds some first value and points to the previous values created in the sequence. Singly linked lists can be used to express most structures; for example, association lists (http://en.wikipedia.org/wiki/Association_list) can be used to represent associative arrays.
http://upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Binary_tree.svg/200px-Binary_tree.svg.png
Furthermore, tree data structures, which point to an arbitrary number of previous nodes, can be implemented persistently.
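Structural sharing is what makes these structures persistent: extending a list does not copy it, it just conses a new cell onto the old one, so every previous version survives untouched. A small sketch:

```clojure
(def xs '(2 3))
(def ys (cons 1 xs))              ; ys = (1 2 3), sharing xs's cells
(def zs (cons 0 xs))              ; zs = (0 2 3), sharing them too

(identical? (rest ys) (rest zs))  ;=> true ; both tails are xs itself
```

No matter how many new versions are built on top of xs, the original cells are never mutated, which is exactly the property the persistent data structures above depend on.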
Single address space:
The fundamental factor that determines our use of address spaces is the size of the processor's word. For example, 32-bit processors have 4 GB of byte-addressable memory, which is clearly too small for most modern tasks; 64-bit processors, on the other hand, have 18.45 exabytes of byte-addressable memory, which should be more than sufficient for modern computers.
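The arithmetic behind those figures is simply 2 raised to the word size:

```clojure
;; bytes addressable by a word of the given width
(defn addressable-bytes [word-bits]
  (.pow (biginteger 2) word-bits))

(addressable-bytes 32) ;=> 4294967296            ; 4 GB
(addressable-bytes 64) ;=> 18446744073709551616  ; ~18.45 exabytes
```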
Given a single address space, every object in memory is given an immutable address, and pointers become perfect object references. This means that the system's linked data structures (http://en.wikipedia.org/wiki/Linked_data_structure), including its persistent data structures, will not lose their meaning.
On the other hand, having multiple address spaces poses a huge problem: when we have a linked data structure, if we want to store it to disk or communicate it to another process, it may completely lose its meaning. A variety of solutions are used to approach this problem, like file systems, inter-process communication, flattening, pointer swizzling, etc.
Homoiconicity:
In Lisp, code is treated like any other type of data. This property is sometimes referred to as homoiconicity. Code is represented as a data structure like any other item in the system:
http://www.redhat.com/magazine/002dec04/features/gcc/figs/ast.png
As such, the Lisp machines are effectively one database that maps immutable addresses to code / data segments, which are versioned using persistent data structures and transparently compiled when necessary (see why do we need modules (http://erlang.org/pipermail/erlang-questions/2011-May/058769.html)). This includes reflectivity / introspection and basic automatic code generation.
Abstractions:
One of the principles of Lisp is that there is no separate means of "low level programming" or "high level programming". Instead it is "Lisp all the way down": Lisp is used all the way down to the machine hardware itself, and abstractions are progressively introduced using functions and macros.
(defmacro square
[n]
`(* ~n ~n))
When used, that macro generates a separate piece of code:
(square 5) ;=> (* 5 5)
This principle is exemplified by self-hosting compilers, the first of which was written for Lisp by Hart and Levin in 1962. In a self-hosting compiler you implement your language in itself. For example, the reductions function, which I used to define factorial, is defined within Lisp itself:
(defn reductions
  "Returns a lazy seq of the intermediate values of the reduction (as
  per reduce) of coll by f, starting with init."
  {:added "1.2"}
  ([f coll]
   (lazy-seq
    (if-let [s (seq coll)]
      (reductions f (first s) (rest s))
      (list (f)))))
  ([f init coll]
   (cons init
         (lazy-seq
          (when-let [s (seq coll)]
            (reductions f (f init (first s)) (rest s)))))))
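A quick check of reductions at the REPL shows the intermediate values accumulating, which is exactly what the factorial definition exploits:

```clojure
(reductions + [1 2 3 4 5]) ;=> (1 3 6 10 15)
(reductions * [1 2 3 4 5]) ;=> (1 2 6 24 120)
```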
Automation
Recall that intelligent agents consist of two components: perceptors and effectors. One of the fundamental goals of AI is to automate the activities of our effectors, and naturally Lisp offers many methods to achieve this goal.
Memory management:
Manual memory management is very time consuming. It is error prone, because it can lead to dangling pointer bugs, double free bugs, memory leaks, memory smashes, etc. It forces you to deal with the details of the memory architecture, which may hinder portability. And all the code that you add to deal with memory management increases paging, which generally cancels out whatever performance advantages you may acquire relative to modern generational copying collectors.
Furthermore, our use of persistent data structures makes it rapidly infeasible to track manually how many previous versions share which parts of a structure, and it often becomes desirable to discard old versions. This necessitates an environment with garbage collection, and this explains why garbage collection is a core feature of functional programming languages.
In the future, our garbage collectors can be extended and configured to deal with particular tasks, and they can be made to use AI search algorithms to find the best means of memory management.
Persistence:
A natural consequence of having a single address space is that you can automatically transfer all data from RAM to persistent storage. This has the effect that the process of "saving" and "loading" that has become very familiar in UNIX based operating systems will be completely automated by the system. If there is a blackout you won't lose your work.
Compilation:
In conventional operating systems you have to deal with the compile/debug/pray cycle. In the Lisp machines all programs should come in the form of readable source code and their execution, including interpretation and compilation, will be automatically handled by the machine.
Performance
There is a misconception that Lisp is "slow". This probably arises from the pitiful speed of executing AI algorithms on non-AI hardware (e.g. contemporary Von Neumann machines), not from any deficiency in Lisp itself, which is even capable of directly representing machine instructions.
Manual performance
This arises from specifically laying out an instruction sequence for some execution platform (e.g. a physical ISA or a virtual machine like the JVM, Parrot, or the CLR) to execute. The performance of this sort of code is platform-dependent (in other words, it isn't really portable). For example, although assembly language is incredibly fast on the ISA it is written for, it loses this efficiency when you have to use hardware virtualisation to run it. Lisp is perfectly capable of this same sort of performance through embedding assembly, for example with the Lisp assembly program (LAP):
(TWNEI NARGS 4)
(MFLR LOC-PC)
(BLA .SPSAVECONTEXTVSP)
(VPUSH ARG_Z)
(LWZ NARGS 331 RNIL)
(TWGTI NARGS 0)
(LI ARG_Y '1)
(LWZ ARG_Z 0 VSP)
(BLA .SPRESTORECONTEXT)
(MTLR LOC-PC)
(BA .SPBUILTIN-PLUS)
In a similar manner to assembly, C is efficient when you explicitly specify an algorithm for its host platform, UNIX, but when you want to move out of that platform, or if you want to create any sort of automation, the language falls short (previously explained here (http://www.revleft.com/vb/showpost.php?p=2252233&postcount=11)). Without automatic memory management, the basic substratum of all advanced automation, C remains limited to manual performance and to "low level programming". This shouldn't be hailed as an advantage, as it is today by some developers in our technological dark age.
Automatic performance
This arises from a description of what you want to do, rather than how to do it, namely a declarative (e.g. functional) specification. The system will use AI search algorithms (http://www.anonym.to/?http://en.wikipedia.org/wiki/Search_algorithm) to find the most efficient solution; for example, the system may use a genetic algorithm and a fitness function to optimise its solutions. One of the features of AI operating systems / machine architectures like Lisp is that they will vastly improve our means of automation.
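The declarative style is easy to see in miniature: you state the result you want and leave the execution strategy to the system:

```clojure
;; what: the largest element. how: left to the runtime.
(reduce max [3 1 4 1 5 9 2 6]) ;=> 9

;; what: the even squares below 100. how: again left to the runtime.
(filter even? (map #(* % %) (range 10))) ;=> (0 4 16 36 64)
```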
Responses
You're really hung up on LISP, aren't you?
I am "really hung up" on artificial intelligence (AI). The main AI programming language is still Lisp, followed by the AI logic programming language Prolog (http://en.wikipedia.org/wiki/Prolog), which is itself generally embedded in Lisp.
You can call any language "building material", but it's still a programming language.
Lisp is a separate foundation of computation that can be used as a building material for programs, and as a means of creating entire machine architectures: the Lisp machines. In this sense, it certainly is distinct from most non-AI languages.
if (a == b)
c = !d;
else
c = d + b;
Do me a favor and let me first post a proper reply to your examples. Here is how this might be implemented in Lisp:
(set! c
(if (= a b)
(not d)
(+ d b)))
If you look at the structure of the code, eventually you will get so used to it that other sorts of code will look worse. Furthermore, here are some specific concerns with your code example:
It uses the visually similar = and == for assignment and equality testing respectively, which can be a source of confusion.
It uses the assignment operator = twice rather than once, which means it has excess side effects.
too complex
Lisp is based upon very simple foundations: the cons cell and the lambda calculus.
Lisp is a 1950s programming language...
We must consider something in terms of its technical merits relative to its age. Since Lisp is a 1950s language that is still technically state of the art today, that should tell you something.
What's your proposed solution to kernels then?
Kernels violate the foundational principle of maintaining a single address space, because kernels are based upon a separate kernel address space.
This has been clocked... C outruns LISP 99% of the time...
How to make Lisp go faster than C (http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBoQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.142.1262%26rep%3Drep1%26type%3Dpdf&rct=j&q=Common%20Lisp%20as%20fast%20as%20C%20pdf&ei=iqySTsaIDYHjiALYi4XjBg&usg=AFQjCNHVW68xhEg1s9BQ3vSril2GqpSbAQ&sig2=drDWdx4IsKyzEl_UDQi6wA&cad=rja)
LISP is considered a sluggish language, even slower than Java.
Please read the above description of performance which refutes this ignorant conclusion.
Imperative, OOP is the most intuitive thing.
1) Imperative programming is intuitive, and it is an integral part of any useful program, but it should be integrated with functional programming.
2) Mutable OOP as defined by C/Java/C# leads to bloat as you end up developing separate classes to deal with everything and its mutable nature makes concurrency / parallelism / distributed programming problematic (see Erlang for a good alternative and Rich Hickey's OOP explanation). Multimethods (http://clojure.org/multimethods) deal with the same task more effectively.
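For a concrete sense of what multimethods buy you, here is a small sketch: dispatch happens on an arbitrary function of the argument (here the :shape key), not on a closed class hierarchy, so new cases can be added without touching existing code:

```clojure
;; dispatch on the :shape key of a plain map
(defmulti area :shape)

(defmethod area :rectangle [{:keys [w h]}]
  (* w h))

(defmethod area :circle [{:keys [r]}]
  (* Math/PI r r))

(area {:shape :rectangle :w 3 :h 4}) ;=> 12
```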
Critique of non-AI languages
Various programming languages have been produced over time to deal with things besides AI; for example, the scripting languages deal with scripting applications. C, on the other hand, was invented with the specific goal of UNIX systems programming, and Java and C# were invented to work with their respective virtual machines. I will critique each of these languages here.
UNIX systems languages
The C programming language was developed between 1969 and 1973 by Dennis Ritchie at Bell Labs specifically to build UNIX. Nonetheless, the language is based upon flawed foundations, perhaps the most important of which is its lack of GC support, which prevents it from having idiomatic support for FP or concurrency / parallelism / distributed programming.
The D programming language (http://en.wikipedia.org/wiki/D_%28programming_language%29) was created by Walter Bright as the successor to C, hence its name is the next letter in the Roman alphabet. The language adds much needed features like automated memory management, and it removes bad features like the C preprocessor. Despite being a thorough improvement over C, D has failed to approach C's popularity after over a decade.
This might make userland programming with C a bit more difficult, and make managed languages more appealing, but this is a major strength in unavoidable cases.
Manifest typing encourages good program structure and consistency of type usage, and allows programmers to have tight control over what types are used.
Most of us have seen films where a character turns off the auto-pilot to drive manually. Automation is incredibly advantageous: it saves work, and it can often produce better results than manual activity, though there are some exceptional cases where you want to do things manually. These exceptional cases are an argument for optional manual controls; they are not an argument against automation itself. After all, one of the fundamental reasons we constructed computers is to automate repetitive tasks.
The D programming language brings at least two new forms of automation to the table: automatic memory management, and automatic type inference. For example, the expression you described previously could be expressed in D:
auto x = 0; // int
As such C is inferior relative to D. A good start at fixing this problem would be to restrict C to the kernel-space of MASOS's. This could leave us to use a secure/portable GC language as the basis of user space (e.g. Java/C#), which would lead to huge advantages such as transparent distribution across the network. Unfortunately, the most popular user-space systems are based upon C/C++ today, e.g. GNOME is C based, KDE is C++ based, and Cocoa is Objective C based.
Yeah, how stupid to have a very useful feature that can let you customize the way you code and actually make things less cryptic...
I recommend that you read the explanation as to why the C preprocessor is deficient on the D programming language site:
http://www.digitalmars.com/d/2.0/pretod.html
Back when C was invented, compiler technology was primitive. Installing a text macro preprocessor onto the front end was a straightforward and easy way to add many powerful features. The increasing size & complexity of programs have illustrated that these features come with many inherent problems. D doesn't have a preprocessor; but D provides a more scalable means to solve the same problems.
What I believe is that C will eventually go the way of the dinosaurs. That chart actually supports the idea, as it shows C popularity in a technical downtrend. If that was a stock chart, I'd consider shorting the stock or buying put options against it.
C has been going up in its relative popularity and it is going to become number one again. If you combine C, C++, and Objective C into one, they trump all other programming languages hands down, accounting for approximately 32% of the modern software industry. Consider the incredible rise in Objective C use:
http://www.tiobe.com/content/paperinfo/tpci/images/history_Objective-C.png
Would you be willing to bet on an incredible rise like that?
Duh, these are not language features... these are OS API features, all of which are available in C on any self-respecting OS...
A language should have idiomatic support for concurrency and distribution within its core. It was definitely specified that C would not deal with concurrency when it was created in 1969, which has led to the separate Concurrent C programming language (http://books.google.com/books/about/The_Concurrent_C_programming_language.html?hl=pt-BR&id=ZDRBv8U_bJ0C) and other attempts to address the language's flaws.
Because there are no strings... A string is just an abstraction of a character array. And manipulating a character array is quite simple.
Manipulating "character arrays" in C is far from "simple", which is precisely why programming languages like Perl (http://en.wikipedia.org/wiki/Perl) and Awk (http://en.wikipedia.org/wiki/AWK) became popular tools on UNIX in the first place. Perl has regular expressions and other string handling features, and it was marketed as a "text processing language." If C really had a good means of handling character arrays, then these alternative systems may never have arisen on UNIX.
Again, this has nothing to do with the language but the standard libraries. Yes, it is a bit cryptic. No, it's not that hard to learn what they do.
Early C implementations had a limited identifier length, which is the only reason we have such cryptic names as "isalnum." There is no reason this should have happened in the first place, let alone for us to continue to utilise it today.
Erm... so does LISP (don't you hate my using all caps for the name? ;))... This is so trivial and silly my initial response can only be a double-facepalm...
The obfuscated code contest is made possible by all the poor design features in C, like the position-sensitive ++ and -- operators and the cryptic names like "isalnum" that arose from an era of limited identifier space:
char*O=" <60>!?\\\n"_ doubIe[010]_ int0,int1 _ Iong=0 _ inIine(int eIse){int
O1O=!O _ l=!O;for(;O1O<010;++O1O)l+=(O1O[doubIe]*pow(eIse,O1O));return l;}int
main(int booI,char*eIse[]){int I=1,x=-*O;if(eIse){for(;I<010+1;I++)I[doubIe-1]
=booI>I?atof(I[eIse]):!O switch(*O)x++)abs(inIine(x))>Iong&&(Iong=abs(inIine(x
)));int1=Iong;main(-*O>>1,0);}else{if(booI<*O>>1){int0=int1;int1=int0-2*Iong/0
[O]switch(5[O]))putchar(x-*O?(int0>=inIine(x)&&do(1,x)do(0,true)do(0,false)
case(2,1)do(1,true)do(0,false)6[O]case(-3,6)do(0,false)6[O]-3[O]:do(1,false)
case(5,4)x?booI?0:6[O]:7[O])+*O:8[O]),x++;main(++booI,0);}}}
This is, yet again, another useful capability and flexibility.
You didn't address the vital point in that quote, which is that if you forget a break you can introduce a silent error. This has led people to spend countless hours debugging their programs, which is why there is a general conception that you should avoid the C switch statement.
An = operator is for assignment. A == operator is for comparison. Learn the difference, and you'll be just fine.
It's easy to know the difference; that isn't what is at issue here. If you accidentally leave off one of the symbols, that will totally change the meaning of the expression. This can arise from a typing accident, and since the expressions are so similar it will be hard to discover the error.
I fail to see how stdio.h and stdlib.h are a problem.
Here are the specific points of contention:
stdio.h contradicts the single address space principle since it has MASOS IO.
stdlib.h contradicts the automated memory management principle since it contains malloc and free.
The best reason not to use GC is because you don't want to -- plain and simple. Plenty programmers are smart enough to manage their own memory allocations/de-allocations and work with addresses.
The fact that some people are "smart enough" to do things manually is an argument for optional manual controls, not an argument against automation itself. Considering this, automatic memory management remains an advantageous feature.
Manual memory management is not that difficult, at least to me.
There is a common expression that goes, "it is easier said than done". When you write really large programs that begin to use persistent data structures, and you subsequently spend many days/weeks/months debugging your code looking for memory management errors, then get back to me.
Let's take, for example, the issue of graphics programming. Graphics hardware is growing insanely fast and powerful, but we still aren't close to true photorealism. We have to squeeze every drop of performance out of our GPUs to push triangles and texels to the screen. The only way you're going to get that fast is through native drivers. Then you need to communicate with those drivers, so we have native technologies like DirectX and OpenGL. After that, you can use managed code like SlimDX, OpenTK or XNA. The managed code works fine and can write great games, but it wouldn't be so great without super fast hardware and native drivers.
Modern non-AI languages are not suitable for everything. This ends up creating a mess of contradictory languages (including scripting languages) and frameworks.
VM systems languages
The Java programming language was developed in 1995 by James Gosling for Sun Microsystems. Java resolves many of the earlier problems of C, like the fact that it isn't garbage collected, its lack of basic security features, and its senseless preprocessor, but in reality Java isn't much of an improvement, because the mutable OOP paradigm it introduced is immanently deficient. This is covered by Rich Hickey, who explains why Lisp (specifically Clojure) isn't OOP like Java and C#:
Q: Clojure is not object-oriented, why not?
A: Well it's not object-oriented the way Java, C# or C++ is. That's not really the way you would structure things. It is in some ways a rejection of my heritage, as an object-oriented programmer. I think after having done it for two decades, I don't believe in it anymore. I just don't think it's the right way to start. It can help you organize your code, but it brings along with it some complexity that I have found in real systems always ends up biting you—and it is related to mutability. By default, an object-oriented program is a graph of mutable objects. That's a very, very difficult thing to think about, debug and keep running. I've worked on very big object-oriented systems, and you always essentially run into the problems related to that architecture. I think that even before you get to concurrency, there are complexity problems with mutable objects that basically affect every large object-oriented application. When you add in concurrency, those problems become much clearer.
So a functional approach was something that I had already started doing, even in programs I was writing in C#. For instance, there were parts of the national exit poll system that were very functional, even though it's a C# system, because the way to defend yourself against this complexity is to write in a more functional style with a lot more immutability. The problem is that it's not very idiomatic in C# or Java to do so. I wanted to make a language where it was—where the default was to do the right thing. When you needed mutability, there would be a good story about how to do that compatibly with concurrency.
The C# programming language was created by Microsoft as a direct copy of Java, and since then they have made the language far superior to Java by adding improvements like first class functions and type inference. Nonetheless, it still has flawed anti-concurrent foundations.
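Hickey's point about immutability being the default is easy to demonstrate in Clojure: "updating" a collection yields a new version and leaves the old one intact, with no defensive copying required:

```clojure
(def v [1 2 3])
(def w (conj v 4))   ; "update" produces a new persistent version

v ;=> [1 2 3]        ; the original is untouched
w ;=> [1 2 3 4]
```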
Scripting languages
The problem with the scripting languages is that they are too domain specific; that is to say, they are designed to deal with scripting some sort of thing. For example, JavaScript is designed to script client-side applications, and PHP is used to script server-side applications. Both of these languages also have a large set of problems: JavaScript was rushed to market and, due to the practices of Microsoft and other companies, it has never recovered from that. (see your language sucks (http://wiki.theory.org/YourLanguageSucks) for more details)
Energy
"Green energy", like I said, has become a "holy grail" sort of thing. Everyone wants to find the winning technology, and it is actively being sought with enthusiasm.
That is certainly what is propagated by the *capitalist media*, but the actual truth is not so simple. We have known for nearly a century that we should create an enernetic world smart-grid-system for the wired/wireless transmission of electric energy, e.g. Nikola Tesla's Wardenclyffe Tower / Robert Metcalfe's smart grid.
Additionally, the capitalist plutocracy has mismanaged nuclear energy development. We have known for decades that we should utilise thorium-fission, but that option hasn't been effectively utilised due to the un-weaponisability of thorium relative to uranium, and the possibility of techno-deprecation of possessed uranium capital. Furthermore, the plutocracy has a long history of fusion technology / plasma tools mismanagement, e.g the assassination of Eugene Mallove shortly after purportedly sustaining a fusion reaction, the suppression of Farnsworth's fusor by the IRS, etc.
Ultimately we have a depleting type-0 energy supply (according to the Kardashev scale) consisting of uranium, coal, oil, and other non-renewable energy sources. As these run out, we will see more dangerous procurement methods, which will in turn result in more accidents like the Deepwater Horizon oil spill and the Fukushima accident, which will lead to social turbulence.