Topic number one has to be Epistemology. If you don’t understand what knowledge, learning, experience, wisdom, intuition, salience, and intelligence are, then all your programming skills can’t prevent you from betting on the wrong horse; don’t trust your superiors and idols to tell you what horse to bet on since (with few exceptions) they didn’t study Epistemology either. Systems science comes second. These should be the foundations of serious AGI research, not logic, mathematics or even computer science. Once you know what needs to be done, the programming will be trivial.
In general, study Model-Free Methods. These are used in the life sciences. Stay away from Perceptron-style neural nets but DO study “Modern Connectionism”. Get comfortable with table lookup, graph theory, big data, and machine learning in general. Get comfortable with “letting go”: don’t insist on control, on understanding the algorithms in detail, or on repeatability; learn to use evolutionary computing such as GA and GP and other discovery-based methods. Study Emergence and Emergent Robustness (an up-and-coming discipline) and Systems Biology. Don’t be afraid of the word “Holistic”. Read Schrödinger’s “What Is Life?” (1944). Study Philosophy of Science so that you know the difference between Holism and Reductionism and understand their limitations in detail. Study how children learn, from *reliable sources* that are based on actual experiments. Study enough Complexity Theory to understand what I and others mean by “Bizarre Systems”.
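The “letting go” point can be made concrete with a minimal genetic-algorithm sketch (the function and parameter names here are my own, for illustration): we specify a fitness measure and a mutation rate, but not the search path itself, and let selection and variation discover a solution.

```python
import random

def evolve_onemax(n_bits=20, pop_size=30, generations=100, mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm: evolve bit strings toward all ones (OneMax)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # count of 1-bits; the only goal we specify
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit with small probability (mutation).
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve_onemax()
print(sum(best))  # a high fitness, though we never hand-designed the path to it
```

The point of the exercise: the run is not repeatable across seeds, and no step is individually “understood”, yet good solutions reliably emerge.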
Avoid Logic, Bayesian logic, and Fuzzy logic – they are all logic, and hence Reductionist. Avoid databases and multi-agent systems. Don’t worry about “AI languages”, including the Lisp family; just learn any solid modern language like C or Java. Avoid Linguistics, since grammars are Models and Models are Reductionist. Avoid Heuristics, since they are Instructionist and hence Reductionist.
And surprisingly to some, avoid the red herrings of Consciousness studies, topics like Qualia, Chinese Rooms, the Turing Test, or pretty much every major issue debated in 20th Century AI.
Is this too controversial? Watch my videos and read my other writings; I also wrote a blog entry of advice for AI students.
- Linear Algebra (for everything)
- Probability/Statistics (for any Bayesian network/graphical model, including neural nets)
- Calculus (derivatives for gradients)
- Basic Algorithms (complexity comparison)
- Logic (both propositional and first-order)
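The calculus and linear-algebra entries in this list meet in gradient descent, the workhorse behind neural-net training. A toy sketch (my own illustration, not from the list above): calculus supplies the derivative, and the update rule follows it downhill.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient, by repeatedly stepping downhill."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite to the gradient
    return x

# Minimize f(x) = (x - 3)^2; calculus gives f'(x) = 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward the minimum at x = 3
```

In real machine learning `x` is a vector or matrix of parameters, which is where the linear algebra comes in; the idea is unchanged.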
For reference, you can just browse through AIMA (Russell and Norvig’s *Artificial Intelligence: A Modern Approach*) and learn whatever it covers, as it is a fairly comprehensive survey of the major subjects in AI.
* Approaches that are based upon logic, theoretical mathematics, formal grammar and ontologies
* Developing “genetic” algorithms that are partly inspired by evolutionary processes
* Statistics-based machine learning and ontologies
* Systems that take inspiration from the human brain (and e.g. try to mimic aspects of how the human neocortex works)
This is a simplistic generalisation, and it may therefore not be useful. It is of course possible to create hybrid systems that take inspiration from several (or all) of these camps. The point however, is that learning that is useful for one of these areas may not be useful for another. Some people might have a lot of use for learning about neuroscience – others may not. Some people might have a lot of use for learning about logic and formal grammar – others may not.
Here is a list of fields of knowledge (my statements about how widely they are used are guesses, but not guesses out of the blue):
* Computer science (programming, databases, etc): Used by all
* Study of algorithms: Used by most
* Probability and statistics: Used by most
* Discrete mathematics: Used by most
* Linear algebra (matrices, multidimensional spaces, etc): Used by many
* Calculus: Used by some
* Neuroscience: Used by some
But really, with a basic understanding of computer science you should be good to go – especially if you have a talent for abstract and creative thinking. Underlying knowledge can be filled out as you go when you see the need.
I think the approaches with the most hope of being central to the creation of human-level AI will be inspired by the human neocortex. But in all honesty, I would advise people not to focus on developing AI as fast as possible, but rather to focus on work that will increase safety and maximize the probability of a good outcome. AI friendliness is a theoretical study that some people are working on, but the number of people working on it is in the tens, not much compared to the thousands of people working on AI as a whole.
One thing all people who work in AI should learn about is the theory around AI friendliness and topics such as the intelligence-explosion hypothesis. Those might seem like science-fictiony or silly things to focus on, but the best knowledge and thinking available on the topic suggests that they aren’t. I would encourage you to read Superintelligence by Nick Bostrom, which is available as an audiobook, or at least watch this YouTube playlist:
The leading group doing theoretical work on AI friendliness is the Machine Intelligence Research Institute. Their reading list contains a lot that also will be of help in the general study of artificial intelligence:
Read the latest information you can find on cognitive science and neuroscience, and add a dash of dynamical systems theory, evolutionary programming, machine learning, and neural networks (especially their dynamical-systems properties). I disagree with Monica in terms of logic, but try to learn a logic which includes rules for intuitive learning, e.g. abduction and induction.
Probability and Statistics as well as linear (and nonlinear!) algebra are especially important as well.
But most important of all: have an open mind.
This is perhaps the wrong way to go about it, though. You don’t need much more than a basic level of understanding in any particular other field to start learning AI. It’s not like calculus which is extremely dependent on algebra. Just dive right in and pick up the details of other fields as you go.
- Graph Theory
- Multi Agent Systems (if not included in AI)
- Neural Nets (if not included in AI)
- Genetic Algorithms (if not included in AI)
- Search Algorithms with and without heuristics
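The last item on that list, search with and without heuristics, is worth seeing in code. Here is a minimal A* sketch on a grid (my own illustration; the function and variable names are not from the answer above). With the Manhattan-distance heuristic it is A*; set the heuristic to zero and the very same code becomes plain uninformed (uniform-cost) search.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells."""
    def h(p):
        # Manhattan distance: an admissible heuristic on a grid.
        # Replacing this with `return 0` gives heuristic-free search.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # length of the shortest path around the wall
```

An admissible heuristic never changes the answer, only how much of the grid gets explored, which is exactly the distinction the bullet point is drawing.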
I disagree with Scott’s (seeming) approach to rely on logic programming as it isn’t flexible enough.
I also disagree with Monica’s (seeming) approach of relying on simple free-running self-development, as it has no example in Nature and would lack a proper direction. Even humans have hardcoded aspects of their behaviour. Such a “let it run and see what shows up” project would be an EXPENSIVE dice throw, with a million possibilities to fail and just some possibilities to succeed.
My AI would be a mixture of both concepts: a static system based on logic and rules, with a core that exchanges the modules step by step for improved connectionist parts optimized by GAs and mutated by stochastic algorithms, like the following schema.
1 Primary Persona with the task of improving and exchanging all OTHER parts.
3 or more Secondary Personas with the task of improving the Primary Persona in a collective or democratic exchange with each other.
Together they should later build the personality of the AI.
All these modules should start out hardcoded and evolve into connectionist ones.
Depending on the planned direction, the AI should be given further modules for the control of sensors, motivators, database connections, communication modules or similar things. These modules should be hardcoded.
The AI should learn to use these hardcoded parts through “AI-readable output”, like a child uses a hardcoded pocket calculator.
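The module-exchange idea in this schema can be sketched minimally (all class and function names here are hypothetical, invented for illustration): the system starts with a hardcoded module, and the Primary Persona swaps it out only when a candidate replacement scores at least as well on the same tasks.

```python
class Module:
    """A pluggable part of the system; starts hardcoded, may later be replaced."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def __call__(self, x):
        return self.fn(x)

class PrimaryPersona:
    """Improves and exchanges the OTHER parts, as in the schema above."""
    def __init__(self, modules):
        self.modules = modules
    def maybe_swap(self, name, candidate, score, tasks):
        # Exchange a module only if the candidate does at least as well.
        if score(candidate, tasks) >= score(self.modules[name], tasks):
            self.modules[name] = candidate

# A hardcoded doubler, and a stand-in for an evolved connectionist replacement.
system = PrimaryPersona({"calc": Module("calc", lambda x: x + x)})
evolved = Module("calc-v2", lambda x: 2 * x)
score = lambda m, tasks: sum(1 for x, y in tasks if m(x) == y)
system.maybe_swap("calc", evolved, score, tasks=[(1, 2), (3, 6)])
print(system.modules["calc"].name)
```

In the full schema the candidate would come from a GA-driven search rather than being written by hand; the swap-only-if-better rule is what keeps the exchange “step by step” safe.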
That’s why I think coding sophisticated AI needs skills in classic AI (logic, search, heuristics) as well as in Computational Intelligence (connectionism, GAs, fuzzy logic).
Although I think that in specific parts and challenges holistic thinking is important, I oppose Monica’s crusade against Reductionism. Try to teach maths with a holistic approach. Try to teach logic with a holistic approach. Try to teach reasoning with a holistic approach, or try it with decision making.
The step from Less and More to ZERO and ONE took us many thousands of years.
Reductionist thinking helped us build and control machines; it gave us the scientific method.
What would the sense be in creating machines WITHOUT this capability?
I would say that having intelligence is important.
If you have intelligence, you can discover that modern AI, IBM’s Watson, is far more intelligent than you or I. Watson is gentler than Mother Teresa. Good, yes?
So, is math important? If you want to focus on the mathematics of machine learning, yes. Otherwise, no. I concentrate on cognitive apps. That is how I will make my money. Apps you can talk to and that talk back, intelligently. Look for an app called Limitless! in another couple of months.
Skills? That’s hard. Humility. Patience. The ability to hold a thought for long enough to understand things you don’t currently understand. The ability to learn. The ability to distinguish opinion from fact.
The last one is the most important.