Monday, May 21, 2018

Artificial intelligence, or can machines think?

Artificial intelligence (AI) is seen as both a boon and a threat. It uses our personal data to influence our lives without us realising it. It is used by social media to draw our attention to things we are interested in buying, and by our tablets and computers to predict what we want to type (good). It facilitates targeting of voters to influence elections (bad, particularly if your side loses).

Perhaps the truth or otherwise of allegations such as electoral interference should be judged in the light of the interests of their promoters. Politicians are always ready to accuse an opponent of being unscrupulous in his methods, including the use of AI to promote fake news or to influence targeted voters in other ways. A cynic might argue that the political class wishes to retain control over propaganda by manipulating the traditional media it understands, and is frightened that AI will introduce black arts to its disadvantage. Whatever the influences behind the debate, there is no doubt that AI is propelling us into a new world, and we must learn to embrace it whether we like it or not.

To discuss it rationally, we should first define AI. Here is one definition sourced through a Google search (itself the result of AI):

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

This definition dwells only on the potential benefits to us as individuals, offering facilities we surely all desire: more efficient use of our time, and greater productivity. But another definition, which might ring alarm bells, is Merriam-Webster’s: “A branch of computer science dealing with the simulation of intelligent behaviour in computers. The capability of a machine to imitate intelligent human behaviour.”

Now we are talking about machines imitating humans, particularly when we add in their ability to learn and adapt themselves to new stimuli. Surely, this means machines are taking over jobs and even our ability to command. These are sensitive aspects of the debate over AI, and even the House of Lords has set up a select committee to report on it, which it did last week.[i] Other serious issues were also raised, such as who should be held accountable for the development of algorithms, and for the quality of the data being input.

This article is an attempt to put AI in perspective. It starts with a brief history, examines its capabilities and potential, and finally addresses the ultimate danger of AI according to its critics: the ability of AI and machine learning to replicate the human brain and thereby control us.
AI basics

AI has always been an integral part of computer development. As long ago as 1950, Alan Turing published a paper, Computing Machinery and Intelligence, which posed the question, “Can machines think?”[i] From it came the concept of a “Turing test” for judging whether a machine has achieved true AI, and the term AI itself dates from this period. The following decade saw the establishment of major academic centres for AI at MIT, Carnegie Mellon University, and Stanford in the US, and at Edinburgh University in the UK.

The 1980s saw governments become involved, with Japan’s Fifth Generation project, followed by the UK Government’s Alvey Programme to improve the competitiveness of UK information technology. This effort failed in its central objective, and the sheer difficulty of programming for ever-expanding sets of rules led to a loss of government enthusiasm for funding AI development. In the US, the Defense Advanced Research Projects Agency (DARPA) also cut its spending on AI by one third.

However, in the late 1980s the private sector began to develop AI for applications in stock market forecasting, data mining, and visual processing systems such as number plate recognition in traffic cameras. The neural-network method of filtering inputs through layers of processing nodes was developed to detect statistical and other patterns, as sketched below.
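To make “layers of processing nodes” concrete, here is a minimal sketch in Python, assuming nothing about any particular historical system: each layer takes a weighted sum of its inputs, adds a bias, and applies a non-linearity, and the result feeds the next layer. The layer sizes and random weights are purely illustrative.

```python
import numpy as np

def relu(x):
    # Non-linearity applied at each processing node
    return np.maximum(0.0, x)

def forward(inputs, weights, biases):
    # Filter an input vector through successive layers of nodes:
    # weighted sum, plus bias, then non-linearity, layer by layer.
    activation = inputs
    for W, b in zip(weights, biases):
        activation = relu(W @ activation + b)
    return activation

# Illustrative network: 4 inputs -> 8 hidden nodes -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]

print(forward(rng.standard_normal(4), weights, biases))
```

In a trained network the weights would be fitted to data rather than drawn at random; the pattern-detecting power comes from that fitting, while the layered filtering itself is exactly as above.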

It is only since the turn of the century that the general public has become increasingly familiar with the term AI, following developments in deep learning using neural networks. More recently, deep learning, used for example in speech and image recognition, has been boosted by a combination of the growing availability of data with which to train systems, increasing processing power, and the development of more sophisticated algorithms. Cloud platforms now allow users to deploy AI without investing in extra hardware. And open-source development platforms have further lowered barriers to entry.
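As an illustration of how far those barriers have fallen, the sketch below defines a small image-recognition network in PyTorch, one open-source platform of the kind referred to above (the article names none specifically); the layer sizes are arbitrary and the model is untrained.

```python
import torch
from torch import nn

# A small feed-forward classifier of the kind used in image recognition.
# Sizes are illustrative; a real model would be trained on labelled data.
model = nn.Sequential(
    nn.Flatten(),          # 28x28 greyscale image -> vector of 784 values
    nn.Linear(784, 128),   # first layer of processing nodes
    nn.ReLU(),             # non-linearity
    nn.Linear(128, 10),    # scores for 10 possible classes
)

batch = torch.randn(1, 1, 28, 28)   # one fake single-channel image
print(model(batch).shape)           # torch.Size([1, 10])
```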

While the progress of AI since Turing’s original paper has been somewhat uneven, these new factors appear to promise an accelerating development of AI capabilities and applications in future. The implications for automation, the way we work, and the replacement of many human functions have raised concerns that appear to offset the benefits. There are also consequences for governments that fail to grasp the importance of this revolution and through public policy seek to restrict its potential. Then there is the question of data use and data ownership. I shall briefly address these issues before tackling the philosophical question of whether AI and machine learning can ultimately pass the Turing test in the general sense.

- Source, Gold Money