How is AI used? What does it mean?

Thursday, December 24th, 2009

What is AI?

The term ‘AI’ can conjure wildly different (and usually far-fetched) ideas compared with what is currently possible. Yet the AI we do today is a step along the way to the Spielberg-style sci-fi image the mass media provides us with.

To AI researchers, ‘AI’ is commonly seen as a set of mathematical tools that can be programmed into computers to enable software to ‘learn’ in some fashion, making it flexible, adaptive and self-tuning. The scale at which it does any of these things normally determines the complexity and computational burden of the task.
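To make the idea of ‘learning’ concrete, here is a minimal illustrative sketch (not any particular system described in this post): a single parameter is tuned automatically, nudged a little after every error, until the software has discovered the relationship in the data for itself.

```python
# Minimal sketch of 'learning' as self-tuning: fit a single
# parameter w so that predictions w * x match observed outputs,
# adjusting w slightly against each error (gradient descent).

def tune(samples, steps=200, rate=0.01):
    """Return a weight w minimising squared error on (x, y) samples."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y
            w -= rate * error * x   # nudge w in the direction that reduces the error
    return w

# The hidden relationship here is y = 3x; the tuner recovers it
# without ever being told the rule explicitly.
data = [(1, 3), (2, 6), (3, 9)]
w = tune(data)
```

The same loop, scaled up to thousands of parameters, is the core of the flexible, self-tuning behaviour described above.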

In most cases the motivation is one of two things:

– cheaper, better, faster, more reliable (eg engine management)
– impossible to do with manual encoding (eg vision)

A few examples:

Data Mining:

More sophisticated data mining uses ‘learning’ by the mining tools to discover the key relationships between the inputs and the desired outputs – such as profitability relative to certain expenditures or management strategies. We want to find what the key performance indicators really are – not just the ones management ‘imagine’ they might be. Rather than guessing and writing code to mechanistically generate some output based on guessed approximations to the truth, AI can be used to learn the real relationships hidden in the data.

Currently we are working with financial transactions to identify performance metrics.
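A hedged sketch of the basic idea, with invented toy data (this is not our actual system): rank candidate indicators by how strongly each one correlates with the outcome, rather than trusting which indicators management ‘imagine’ matter.

```python
# Toy illustration: rank candidate performance indicators by the
# strength of their correlation with an outcome such as profit.

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_indicators(records, outcome):
    """records: {name: values}; return names, strongest signal first."""
    scores = {name: abs(correlation(vals, outcome))
              for name, vals in records.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented data: 'ad_spend' tracks profit closely; 'office_plants' does not.
profit = [10, 12, 15, 11, 18]
metrics = {
    "ad_spend":      [5, 6, 8, 5, 9],
    "office_plants": [3, 1, 4, 4, 2],
}
ranking = rank_indicators(metrics, profit)
```

Real data mining tools go far beyond pairwise correlation, learning non-linear and multi-variable relationships, but the goal is the same: let the data say which indicators matter.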

Artificial Vision:

Vision is one of the most complex domains, with a lot of active research. We humans are unaware of the complexity because of our powerful natural image-processing ability: our visual system is plastic, constantly tuning itself to the world around us to enable us to see. Around 60% of our brain is engaged in understanding a new scene – an astronomical amount of computation compared to present-day computers’ abilities. This seemingly effortless task requires us to work out how the streams of information coming through our eyes match up, what is meaningful and what is not, and to generate a 3D perception from the 2D information on the retina. No manually coded solution can do this kind of thing. Machine learning is a key part of automatic machine vision, which is still in its relative infancy. We currently undertake imaging research.

Natural Language Processing:

This is another active area of our research. As humans, we find understanding a news-feed very easy, because we understand the context and background of the text. There is a compelling drive to enable computers to understand text too. AI can be used to extract relationships from text and its context, and to automatically build and use taxonomies to store, sort, file and retrieve it. Yahoo! uses AI tools to generate its menus and taxonomies automatically, which are machine sorted. This is a relatively basic use of AI.

More advanced uses require a better understanding of context and content. We are working to provide automatic learning and rating of financial and business news-feeds.
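At its most basic, sorting text into a taxonomy can be sketched like this (the topics and keywords here are invented for illustration, not drawn from any real system): file each incoming item under the topic whose vocabulary it overlaps most.

```python
# Toy sketch of taxonomy-based filing: assign text to the topic
# whose keyword set it shares the most words with.

TAXONOMY = {
    "finance": {"shares", "profit", "market", "dividend"},
    "sport":   {"match", "goal", "team", "season"},
}

def classify(text):
    """Return the taxonomy topic with the largest word overlap."""
    words = set(text.lower().split())
    return max(TAXONOMY, key=lambda topic: len(words & TAXONOMY[topic]))

headline = "Shares rally as market hits profit record"
topic = classify(headline)
```

In a real system the taxonomy itself would be learned from the documents rather than hand-written, and the matching would weigh words by how informative they are; this sketch only shows the filing step.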

Engine Management:

A simple example is the engine management system of a modern quality car, which has to adapt continuously to the real, unique state of your engine. In the past, the engine was tuned only at a garage. The result was a compromise in both engineering and settings that simply drifted and got worse until the next time you could take it in.

The adaptive versions are constantly tuning to cater for instantaneous changes in load, humidity, temperature and fuel quality, as well as longer-term wear and tear. The controller can ‘learn’ the best way to tune your engine for each operating condition in which you use the car. Overall the efficiency goes up, the engine lasts longer, and the system can compensate for wear and tear as well as warn you that something is wrong while the engine still works. In the past I built an engine management system to optimally adjust a petrol engine’s fuel efficiency.
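The adaptation loop can be sketched very simply (this is an illustration, not a real ECU or the system I built): keep a separate estimate per operating condition, and nudge it towards whatever setting turned out to run best, so the controller tracks wear and changing conditions instead of relying on fixed garage settings.

```python
# Illustrative sketch of per-condition adaptive tuning (not a real ECU).

class AdaptiveController:
    def __init__(self, rate=0.2, default=1.0):
        self.rate = rate            # how quickly to adapt
        self.default = default      # factory setting for unseen conditions
        self.settings = {}          # condition -> learned mixture value

    def setting(self, condition):
        return self.settings.get(condition, self.default)

    def feedback(self, condition, observed_best):
        # Move the stored setting a fraction of the way towards the
        # mixture observed to run best under this condition.
        current = self.setting(condition)
        self.settings[condition] = current + self.rate * (observed_best - current)

ecu = AdaptiveController()
for _ in range(30):                 # repeated driving in 'cold' conditions
    ecu.feedback("cold", 1.3)       # a richer mixture keeps running best
```

After enough feedback the ‘cold’ setting converges on the observed optimum, while conditions never seen keep the factory default; as wear shifts the optimum, the stored settings follow it.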

Speech Recognition:

The speech recognition in your phone ‘learns’ to understand your voice. Just as we humans initially make more mistakes with new speakers – especially those with strong accents – and have to ask them to repeat themselves, the software makes more errors at first; once it is familiar with a voice, it no longer needs to.

The same applies to speech generation. We have done some work on speech applications.

Stock Price Prediction:

This is commonly researched by most institutional investors. We have also worked in this field, with very good results in off-line testing, and will be returning to it soon.

Special Effects:

In the movie industry the best animations with totally natural movement are generated by AI. Before AI was used to ‘discover’ the best models for motion, the hand-coded attempts (representing thousands of person-years of programming at vast cost) were obviously ugly and unnatural. Now, in special effects, many of the animals and humans in dangerous action scenes are simulations based on AI solutions to natural motion.

Computer Games:

Games are a common test bed for certain types of AI, in which the creatures learn your movements and capitalise on patterns in your play. This makes for more interesting and challenging gameplay.
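A minimal sketch of how a game opponent might learn your movements (an invented illustration, not any particular game engine): count which move tends to follow each of your previous moves, then predict accordingly – a first-order Markov model of the player.

```python
# Toy game AI: learn which move the player tends to make next,
# given their previous move, and predict it.

from collections import Counter, defaultdict

class PatternLearner:
    def __init__(self):
        self.follows = defaultdict(Counter)   # prev move -> counts of next move

    def observe(self, moves):
        """Record a sequence of player moves."""
        for prev, nxt in zip(moves, moves[1:]):
            self.follows[prev][nxt] += 1

    def predict(self, last_move):
        """Guess the player's next move; None if nothing observed yet."""
        counts = self.follows[last_move]
        return counts.most_common(1)[0][0] if counts else None

ai = PatternLearner()
ai.observe(list("LRLRLRLR"))        # the player alternates left/right
guess = ai.predict("L")             # the AI anticipates 'R'
```

A predictable player is quickly punished, which pushes the human to vary their play – exactly the more interesting, challenging gameplay described above.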

Adaptive Computing Infrastructure:

We work on adaptive computing systems that can manage themselves through failures without going down.

What all these things have in common is a need for the system to ‘learn’ and to self-optimise towards certain goals: lower cost, higher speed, greater accuracy, better efficiency and so on.