Science in the News

Opening the lines of communication between research scientists and the wider community.

The History of Artificial Intelligence

by Rockwell Anyoha

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers for whom the concept of artificial intelligence (or AI) was culturally familiar. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why can’t machines do the same? This was the logical framework of his 1950 paper, “Computing Machinery and Intelligence,” in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to change fundamentally. Before 1949, computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to explore these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist, a program designed to mimic the problem-solving skills of a human and funded by the RAND Corporation. It is considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion of artificial intelligence, a term he coined at the event itself. Sadly, the conference fell short of McCarthy’s expectations; people came and went as they pleased, and there was no agreement on standard methods for the field. Despite this, attendees wholeheartedly shared the sentiment that AI was achievable. The significance of this event cannot be overstated: it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher. In 1970, Marvin Minsky told Life magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

[Figure 2: Timeline of AI milestones]

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.” As patience dwindled, so did the funding, and research slowed to a crawl for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was captured for virtually every situation, non-experts could receive advice from the program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding for the FGCP ceased, and AI fell out of the limelight.
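
The ask-the-expert-then-advise idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the rule-based mechanism behind expert systems (the rules and facts below are invented for the example), not a reconstruction of any real system:

```python
# Toy sketch of the rule-based idea behind 1980s expert systems:
# knowledge elicited from a human expert is stored as if-then rules,
# and an inference step matches known facts against them.
RULES = [
    # (facts required for the rule to fire, advice it produces)
    ({"fever", "cough"}, "possible flu: recommend rest and fluids"),
    ({"fever", "rash"}, "possible measles: refer to a specialist"),
    ({"sneezing"}, "possible allergy: suggest an antihistamine"),
]

def advise(facts):
    """Return the advice of every rule whose required facts all hold."""
    known = set(facts)
    return [advice for required, advice in RULES if required <= known]

print(advise(["fever", "cough"]))
# -> ['possible flu: recommend rest and fluids']
```

Real systems such as MYCIN chained hundreds of rules like these and attached certainty factors to each conclusion, but the advice-giving core was essentially this kind of match between stored expert rules and the facts at hand.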

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step toward an artificially intelligent decision-making program. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of spoken language interpretation. It seemed there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals All Wounds

We haven’t gotten any smarter about how we code artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago is no longer a problem. Moore’s Law, which observes that the memory and speed of computers roughly double every two years, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI at the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
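
The “wait for Moore’s Law to catch up” arithmetic is easy to check. A quick back-of-the-envelope sketch, assuming a doubling period of roughly two years:

```python
# How much capacity does waiting buy under exponential doubling?
def growth_factor(years, doubling_period=2.0):
    """Capacity multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Thirty years of doubling every two years is 2**15:
print(round(growth_factor(30)))  # 32768, i.e. a ~32,000x increase
```

Whether the doubling period is closer to one year or two, the conclusion survives: a few decades of hardware progress multiply available compute by four to five orders of magnitude, which is why goals that were hopeless in 1970 looked routine by 1997.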

Artificial Intelligence is Everywhere

We now live in the age of “big data,” an age in which we have the capacity to collect huge quantities of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries, such as technology, banking, marketing, and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s Law is slowing down a tad, but the increase in data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore’s Law.

So what is in store for the future? In the immediate future, AI language looks like the next big thing. In fact, it’s already underway. I can’t remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this will be accomplished in the next 50 years. Even if the capability is there, ethical questions would serve as a strong barrier against fruition. When that time comes (and better even before it comes), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects), but for now, we’ll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence .

For more information:

Brief Timeline of AI

Complete Historical Overview

Dartmouth Summer Research Project on Artificial Intelligence

Future of AI

Discussion on Future Ethical Challenges Facing AI

Detailed Review of Ethics of AI

302 thoughts on “The History of Artificial Intelligence”

During the Times Heals all wounds you say that “holding us back 30 years ago was no longer a problem. , which estimates that the ” the , is supposed to have the words Moore’s Law infront of it the article was a great thing though thank you for the information and apologies for being nitpicky.

yeah noticed lots of small mistakes but overall great informational article

can i use this article for reasearch?

this helped me a lot with my research report, thank you.

np, it took me a while to publish this im glad someone found it helpful.

This website helped me a lot with my research report about the invention of AI, thank you!

This website actually helped me quite a bit on my Research Paper! Thanks! 😀

helped me too, it’s an amazing website 🙂

iam a new student of AI and i need your help during my research reports/thesis

am from Uganda +256 East Africa.

Intelligence is not common to everyone. If you are smart enough to figure out problems and solve without much stress, then you are intelligent. AI is still no match for human intelligence but its quite unbelievable hearing scientists create supercomputers that operate faster than the human brain.

Agree with you. Computers can think, count, solve complex mathematical problems and equations, but compared to human intelligence,AI is like a small child. It can work only in programmed situations; it cannot understand what improvisation and creativity are. At least for now. Recently I read an article of a company that develops innovative technologies ( ), even they assure that artificial intelligence is far from ours.

Thanks for the article. According to the byline, the author studies “the use of machine learning to model animal behavior”, it would be great, if there were some examples in the text. It is interesting how such work is carried out when it comes to animals. Maybe any more articles or videos to have a look?))

Thanks, Rockwell! It’s really interesting to see AI-development from the retrospective touch of view. For me, is very interesting to investigate the use of AI in different areas. I want to add some articles, where people can find interesting information about AI to your list:

Thank you for the links Mariia!

i love you from france

I need help for me english work on history of artificial intelligence

Hi! Maybe this blog will help you – , there are more info about kinds of artificial intelligence. Just interesting to read.

Thanks for the article explaining the history of Artificial Intelligence from head to toe. But there is a mistake in it. “This is precisely how Deep Blue was able to defeat Gary Kasparov in 1997, and how Google’s Alpha Go was able to defeat Chinese Go champion, Kie Je, only a few months ago.” The name should be “Ke Jie” not “Kie Je”. Hope this can be corrected soon.

Fixed – thank you!

Great article. What are the units on the y-axis of the timeline graphic?

In the section The Future it states “Even if the capability is there, the ethically would serve as a strong barrier against fruition. ” Maybe switch ‘ethically’ to ‘ethical questions’ or ‘ethical feasibility.’ Sorry for nit picking, otherwise great article.

Good catch – fixed. Thank you!

What was used before artificial intelligence?

Thanks, Rockwell Anyoha! Interesting information. I want to add some article, where people can find interesting information about AI to your list:

The information is great, it’s been a long time since last time I read a long story. I have more understanding about AI. Thanks.

Hi Rockwell, thank you for you post! I saw in comments that people sharing information about machine learning but your article about AI. So I’ve decided to share with this post about the difference between AI, Machine Learning and Data Science.

It’s really interesting to see AI-development from the retrospective touch of view. For me, is very interesting to investigate the use of AI in different areas. I want to add some articles, where people can find interesting information about AI to your list:

Great article, thank you.

I saw you erased my last comment i am suing all of you for being sexist

Thanks for sharing your experience! Really enjoyed the reading. These days, AI is not new social media trends as well. Having become popular in 2017, they still enjoy the demand. Also, I would like to share with you an interesting and up-to-date software company blog: There is always topical and checked information. Take a look!

That was a great read indeed. Very informative. Thanks for sharing it. 🙂

The evolution of AI is so impactful that soon it will be like we are living in a sci-fi movie! Maybe we’re already doing so to some extent! AI evolution in eCommerce ( ) is already creating a lot of buzz in the market. You can’t even forget about the healthcare sector right?

Now, when AI is combined with other futuristic technologies like Machine Learning, Deep Learning, Neural Network, IoT – wonders are happening and more to come yet!

Hello, Rockwell!

Thank you for sharing this information. From 2017 and until now AI becomes a real trend. This year also back in-game AR. A technology that, with the help of your devices, allows you to see the digital version of different objects overlaid on the real background. The possibilities of AR are endless. It can be implemented in web design as a virtual dressing room for the retail stores when the users can try out products online. Or, this can be used for interior design ideas when you virtually try a piece of furniture or arts in your home to see whether it’ll match. All users need to have is just a web or smartphone camera. I’d like to share with you and your readers more read about AI, AR, and web design trends 2020: I hope you will like it)

Thank you for your great content! Artificial Intelligence has taken the world by a storm. Machine learning and AI have become an essential part of our lives, from “Hey Siri” entering with us on live chat to self-driving cars technology. In fact, the growth of AI should more than double revenue to become a USD 12.5 billion industry. If you want to know more, don’t hesitate to see:

Good stuff, was useful, thanks Rockwell!)

I am a fifth grader, and I found this topic to be so interesting I decided to do a research project on it and at first I had trouble on finding sources, but once I found this I loved it! Everything in this passage was useful and I also absorbed a lot from it. I would like to say thank you very much to the author for helping me make my research project a success and i look forward to absorbing more into the topic through articles that Rockwell Anyoha hopefully makes. Again thank you very much! With all the information this article provided I was able to create a college leveled essay about Artificial Intelligence!

So glad you enjoyed it! Keep up the awesome work:)

Dear Rockwell, thank you for your interesting overview!! One question: What does the Y-achsis in your graphic stand for? Thanks for answering! C

Nice to read this content thanks for sharing

Nice article. I have read comments citing more articles. There is much more to understand about AI. Thanks

Punctuation typo in “Time Heals All Wounds” section: “`30 years ago was no longer a problem. , which estimates`”

You’re so cool! I do not believe I’ve read anything like that before. So great to discover somebody with some original thoughts on this subject matter. Seriously.. many thanks for starting this up. This web site is one thing that’s needed on the web, someone with some originality!

Totally agree! Especially knowing that artificial intelligence is one of the biggest trends these days. If you like discovering new things, here are a few more major trends . It is so exciting to know that technology gets improved over time and changes our lives for the better. Hope to be useful 😉

Wonder what will happen when AI and the human brain can connect. We expect AI to be one of the most emerging technology trend in 2020:

P.S. Poor Gary Kasparov

A speed of the actions, performed by this kind of AI, is the same as the human brain, but the quality of tasks is much higher. It can be compared to the fact of how people are higher by their intelligence from the animals. Both are smart, by it is clear who is able to do more, in the shortest period of time, and with higher quality, in the result. This article is food for thought, by no means.

Good Article and very useful .

science is trying so much by artificial intelligence but still not succeed. but maybe one day they can. thanks for your post

A ery good article that explains the history of AI very well. Would have liked a little bit about the history of AI in healthcare so far, to prepare us for the courses ahead.

Thank you Rockwell, for the retrospective to Artificial Intelligence. Now machine learning and AI development is a part of the daily work of software engineers for so many different industries

Thanks for the great article!!! Artificial intelligence advances extremely fast, bringing the vast number of technological approaches into life. I would also like to share some interesting article covering this topic:

Thank you for the fascinating retrospective of Artificial Intelligence. Now machine learning and AI development are part of the daily work of software engineers in many industries. I thought it might also be interesting to add a practical case study as an example of an AI-based business solution and the value it can create – .

Thanks for the fantastic article. Would it be okay if I translate this into Korean and share your article ? Your original post link will be of course added!!

A great piece of information. It takes a while to go through this article, but it is very resourceful. Thanks for sharing. Helped a lot.

A very good article that explains the history of AI very well. Would have liked a little about the history of AI in healthcare so far, to prepare us for the courses ahead. Check more information here

Good things come to people who wait, but better things come to those who go out and get them (Good luck to all Harvard undergraduate students)

Amazing history of AI, We know that Artificial Intelligence is a computer science that develops programs to mimic human intelligence. The level of intelligence may range from recognizing patterns in data to deriving insights for problem solving.

This was extremely helpful with research , thank you .

A great piece of information. And a lot more on the comments. This helps me some get some more ideas for my next article on Artificial Intelligence. Thanks for sharing. It helped a lot.

I totally agree with Peter Lee. This is an excellent blog on AI history. Looking forward to reading such articles ahead to get more inspiration and the knowledge shared.

Lovely and very informative article! Good job. Btw, here you can find additional info in this article –

Wow, very interesting post and comments!)

Very good information onthe history and progress of AI

“The History of Artificial Intelligence” wow nice title and article also, thanks for this awesome tutorial. you give the best article to your readers.

I just read this post and found it really informative for my queries. Thank you for this post.

Interesting article! AI is becoming increasingly important for mobile app development companies.

Wonderful information about the AI, with the help of artificial intelligence the scientists are making the robots that will do behave like humans ,its a great achievement for us ,thanks for sharing this important achievement with us.

its a very fantastic post , keep sharing

Hello Rockwell Anyoha Thanks for sharing a good information related to AI and included health data and best transparency

Hello! This was an awesome article and histogram.

I hope that you continue to expand more and include information about the use of simulation in the AI process and the General Problem Solver in terms of AI advancement. Those two topics will help the readers deeply understand two important foundational concepts behind how AI came to be.

I appreciate the shared links from everyone too. Great article. Thank you

Very nice content about artificial intelligence. I really like. Let’s continue. Thanks Rockwell.

Machines can work as they are programmed, but Yes, now with advancement of technology programming can made that machine react according to circumstances, most probably which means they can think, but it is just program made by humans.

It is great information about the history of Artificial Intelligence and it would be helpful for the beginner who wanna make their career in Information Technology. But It could be more interesting if you tell about how to implement it how to make our life more comfortable. but overall thanks for sharing this.

A web application or web app is a client-server software application in which the client (or user interface) runs in a web browser. or it is an application program that is stored on a remote server and delivered over the Internet through a browser interface.

Nice article about the AI, you can also checkout how AI can empower healthcare industries. AI in healthcare

Very interesting article. Thank you

One of the most prominent examples of using AI in business is Amazon. It is known not only for its voice-controlled virtual assistant Alexa but also for projects called Amazon Prime and Amazon Go. Amazon Prime’s smart robots deliver items from the warehouse to people’s doorsteps the same day. Check my new article data science vs machine learning vs artificial intelligence

Such a good topic for the times, can’t learn enough. As a tech person, I am always looking to expand my horizon, but nowadays there are unlimited possibilities with all the services offered by the cloud providers. I was just reading this article – about web services that offer machine learning and AI.

It is a great content of AI and I must say this provides a lot of things regarding all I need to improve my knowledge of AI. The language is easy to understand.

Thanks for sharing this post, is a very helpful article. Artificial Intelligence is a great article! Information nicely explained on truth fictions stances beliefs about Artificial Intelligence. Please keep sharing.

Hi Rockwell, thank you for your brilliant post! I saw in comments that people sharing information about machine learning but your article about AI. So I’ve decided to share a post about the key differences between Natural Language Processing and Text Mining. I am sure many will be interested in reading this:

I just would like to offer you a huge thumbs up for your great information you have got right here on this post. Thanks for the information

The blog is really helpful

Hi! Article was quite catchy and interesting, huge thumbs up for information in it. Maybe in the future we all suffer from terrible consequences of AI usage like in some science fiction movies, but for now I think AI can be really helpful to automate routine or dangerous work. From my point of view nowadays it is always necessary to have a good ai programmer in your development team, if you are working in any IT sphere.

This is very interesting. I heard that there are 4 types of AI: Reactive Machines, Limited Memory, Theory of Mind, and Self Aware. But different scientists think in different ways. By the way, this is a very promising area and a lot of specialists work there now. I also visited this blog of app development company cause they share a lot of interesting info there. I think about app development every day, cause this is a part of my job, but AI is also very interesting for me.

I was going through this article and I couldn’t believe that this was written in 2017. I think it is pretty much useful even in 2021 when AI is yet to take over all of our problems! Website Development Companies should start working on something that would utilize the AI and get users a better experience altogether!

This is an amazing blog. Your blog is really good and your blog has always good thank you for information.

Hi Rockwell, thank you for your article! When can I find out more?

Very good article! Thank you

Very good article! Thank you!

Have to really admit it cleared my doubts,thanks

Very nice article! Thanks for sharing information of artificial intelligence!

Hey Rockwell, After checking your article and comment section I noticed that people are sharing information about AI. So I’ve also decided to share an article related to it.

Such a good topic for the times, can’t learn enough. As a tech person, I am always looking to expand my horizon, but nowadays there are unlimited possibilities with all the services offered by the cloud providers. I was just reading this article about web services and ecommerce

An extraordinary piece that reveals genuinely necessary insight into arising subjects like AI and ML development and its effect on business as there are numerous new subtleties you posted here. Now and then it isn’t so natural to assemble a Mobile Application advancement without custom information; here you need legitimate improvement abilities and experienced Top AI ML development company. Be that as it may, the subtleties you notice here would be a lot of accommodating for the Startup program. Here is one more first rate arrangement supplier “X-Byte Enterprise Solutions” who render achievable and solid answers for worldwide customers.

Know more here: Top AI ML development company

Wow, excellent post… This is really a great article and a great read for me. It’s my first visit to your blog and I have found it so useful and informative specially this article.

Stupendous Post.. Actually I am searching for something else but fortunately getting your post. Yes there is no doubt, AI language is looking like the next big thing. There are many top class offshore software development companies available in the marker where you can hire AI/ML Developer as per your requirements. Awaiting for more post like this…



Encyclopedia Britannica

Theoretical work

Alan Turing and the beginning of AI


The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
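The stored-program idea can be made concrete with a toy simulator: the machine's "program of instructions" is itself just data handed to the interpreter, exactly as the passage describes. This is an illustrative sketch, not Turing's original formalism; the state names, tape encoding, and the example program (which inverts a binary string) are all invented here for demonstration.

```python
# A minimal Turing machine simulator. The program is plain data
# (a dict), echoing the stored-program concept: instructions live
# in memory alongside the symbols they operate on.

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """Run `program` until it reaches the 'halt' state.

    `program` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = program[(state, symbol)]
        head += move
    # Read back the non-blank portion of the tape, left to right.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Example program: scan right, flipping 0 <-> 1, halt at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

Because the program table is ordinary data on the same footing as the tape, a machine could in principle read and rewrite it, which is the "modifying or improving its own program" possibility noted above.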

During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, Buckinghamshire, England. Turing could not turn to the project of building a stored-program electronic computing machine until the cessation of hostilities in Europe in 1945. Nevertheless, during the war he gave considerable thought to the issue of machine intelligence. One of Turing’s colleagues at Bletchley Park, Donald Michie (who later founded the Department of Machine Intelligence and Perception at the University of Edinburgh), later recalled that Turing often discussed how computers could learn from experience as well as solve new problems through the use of guiding principles—a process now known as heuristic problem solving.

Turing gave quite possibly the earliest public lecture (London, 1947) to mention computer intelligence, saying, “What we want is a machine that can learn from experience,” and that the “possibility of letting the machine alter its own instructions provides the mechanism for this.” In 1948 he introduced many of the central concepts of AI in a report entitled “Intelligent Machinery.” However, Turing did not publish this paper, and many of his ideas were later reinvented by others. For instance, one of Turing’s original ideas was to train a network of artificial neurons to perform specific tasks, an approach now known as connectionism.

At Bletchley Park, Turing illustrated his ideas on machine intelligence by reference to chess, a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested. In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminative search. Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer to run his chess program. The first true AI programs had to await the arrival of stored-program electronic digital computers.
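The kind of cut-off search Turing reasoned about can be sketched as minimax over a toy game tree. This is a minimal illustration, not Turing's actual chess program: the nested lists stand in for positions, and the leaf numbers stand in for heuristic evaluations of positions where the search is cut off rather than played out to the end.

```python
# Minimax over a toy game tree. Exhaustive search to the end of a real
# game is infeasible, so the search stops early and the leaf numbers
# play the role of heuristic scores for the positions reached there.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # cutoff: heuristic score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two plies: the maximizer chooses a branch, then the minimizer replies.
tree = [[3, 12], [2, 4], [6, 8]]
print(minimax(tree))  # -> 6 (the best branch guarantees a score of 6)
```

A real engine would generate child positions from a board state instead of reading a fixed tree, but the backed-up-value logic is the same.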


In 1945 Turing predicted that computers would one day play very good chess, and just over 50 years later, in 1997, Deep Blue, a chess computer built by the International Business Machines Corporation (IBM), beat the reigning world champion, Garry Kasparov, in a six-game match. While Turing’s prediction came true, his expectation that chess programming would contribute to the understanding of how human beings think did not. The huge improvement in computer chess since Turing’s day is attributable to advances in computer engineering rather than advances in AI—Deep Blue’s 256 parallel processors enabled it to examine 200 million possible moves per second and to look ahead as many as 14 turns of play. Many agree with Noam Chomsky, a linguist at the Massachusetts Institute of Technology (MIT), who opined that a computer beating a grandmaster at chess is about as interesting as a bulldozer winning an Olympic weightlifting competition.

How artificial intelligence is transforming the world

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI's application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Darrell M. West
Senior Fellow, Center for Technology Innovation; Douglas Dillon Chair in Governmental Studies

John R. Allen

Despite this widespread lack of familiarity, AI is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward, which are detailed in the recommendations section below.

I. Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3 According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.


Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.



AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
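The "looking for underlying trends" step can be sketched in a few lines. Here is a minimal example that fits a straight line to observations by ordinary least squares and uses it to anticipate a new case; the data and the ad-spend-versus-sales framing are invented purely for illustration:

```python
# A minimal sketch of "machine learning takes data and looks for
# underlying trends": fit a line y = a*x + b by ordinary least squares,
# then use the learned trend to predict a new case.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]            # e.g. months of data (made up)
ys = [2.1, 3.9, 6.2, 8.0, 9.8]  # e.g. observed sales (made up)
a, b = fit_line(xs, ys)
print(a * 6 + b)  # trend-based prediction for month 6 (about 11.85)
```

Real systems apply the same idea at vastly larger scale and with far richer models, but the workflow is the one described above: find a pattern in robust data, then apply it to a specific practical question.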


AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.

II. Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

Related Content


How robots, artificial intelligence, and machine learning will affect employment and public policy

Apple Chief Executive Officer Tim Cook (C) attends an event for students to learn to write computer code at the Apple store in the Manhattan borough of New York December 9, 2015.     REUTERS/Carlo Allegri - GF10000260390

Leveraging the disruptive power of artificial intelligence for fairer opportunities

Daniel Goehring demonstrating hands-free driving in Berlin

Work and social policy in the age of artificial intelligence

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing, base decisions on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.
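The matching step described above can be sketched as a toy limit-order book. This is a deliberately simplified price-time-priority model with invented prices and quantities; real exchange engines handle many order types, partial cancels, and far stricter invariants:

```python
import heapq

# Toy sketch of order matching: buys fill against the cheapest resting
# sells at or below their price; sells fill against the highest resting
# buys at or above theirs. Arrival order breaks price ties.
class OrderBook:
    def __init__(self):
        self._buys = []   # max-heap on price (stored negated)
        self._sells = []  # min-heap on price
        self._seq = 0     # arrival counter for time priority

    def submit(self, side, price, qty):
        self._seq += 1
        trades = []
        if side == "buy":
            while qty and self._sells and self._sells[0][0] <= price:
                s_price, s_seq, s_qty = heapq.heappop(self._sells)
                fill = min(qty, s_qty)
                trades.append((s_price, fill))
                qty -= fill
                if s_qty > fill:  # put the unfilled remainder back
                    heapq.heappush(self._sells, (s_price, s_seq, s_qty - fill))
            if qty:
                heapq.heappush(self._buys, (-price, self._seq, qty))
        else:
            while qty and self._buys and -self._buys[0][0] >= price:
                neg_price, b_seq, b_qty = heapq.heappop(self._buys)
                fill = min(qty, b_qty)
                trades.append((-neg_price, fill))
                qty -= fill
                if b_qty > fill:
                    heapq.heappush(self._buys, (neg_price, b_seq, b_qty - fill))
            if qty:
                heapq.heappush(self._sells, (price, self._seq, qty))
        return trades

book = OrderBook()
book.submit("sell", 101.0, 50)
book.submit("sell", 100.5, 30)
trades = book.submit("buy", 101.0, 60)
print(trades)  # [(100.5, 30), (101.0, 30)]
```

Note that the incoming buy is filled at the best available prices first, which is the "blink of an eye" matching the paragraph describes, only at human-readable speed.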

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
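One simple version of the abnormality-spotting idea is a robust z-score: flag values that sit far from the typical amount. The amounts and the conventional 3.5 cutoff below are illustrative only; production fraud models combine many signals, not a single number:

```python
import statistics

# Flag amounts far from the median, scaled by the median absolute
# deviation (MAD). Median/MAD are used instead of mean/stdev because a
# single huge outlier can inflate the standard deviation enough to
# hide itself. Assumes MAD > 0 (i.e., the amounts are not all equal).
def flag_outliers(amounts, threshold=3.5):
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) / mad > threshold]

amounts = [20, 35, 18, 42, 25, 30, 22, 5000]  # one suspicious charge
print(flag_outliers(amounts))  # [5000]
```

Flagged cases would then go to a human investigator, which matches the report's framing of AI as surfacing "deviant cases requiring additional investigation" rather than rendering final judgments.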

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16


The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time-competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This will force significant improvements to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy nodes.
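The learn-from-labeled-examples step can be illustrated with a deliberately tiny stand-in: learning a single decision threshold on one invented feature (node diameter in mm) and then applying it to new cases. Real systems train deep networks on full CT images; every number here is made up:

```python
# Learn the cutoff that best separates "normal" (0) from "irregular" (1)
# examples on a single feature, then classify new cases against it.
def learn_threshold(sizes, labels):
    best_t, best_correct = None, -1
    for t in sorted(sizes):  # try each observed size as a candidate cutoff
        correct = sum((s >= t) == bool(y) for s, y in zip(sizes, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

sizes  = [4, 6, 5, 12, 14, 11, 7, 13]   # node diameters in mm (invented)
labels = [0, 0, 0, 1,  1,  1,  0, 1]    # 0 = normal, 1 = irregular
t = learn_threshold(sizes, labels)
print(t, [int(s >= t) for s in [5, 12]])  # 11 [0, 1]
```

A deep network does the same thing in spirit, except that it learns millions of such decision boundaries over raw pixels rather than one boundary over a hand-picked measurement.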

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.


Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on top of the vehicle, they image the full 360-degree environment, using radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
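The ranging measurement itself rests on a simple time-of-flight relationship: the pulse travels to the obstacle and back, so the one-way distance is half the round trip. A back-of-envelope sketch:

```python
# Time-of-flight ranging, the principle behind LIDAR distance
# measurement: distance = speed_of_light * round_trip_time / 2.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def echo_distance_m(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(round(echo_distance_m(200e-9), 2))  # a 200 ns echo -> ~29.98 m
```

The nanosecond timescales involved are why these measurements, repeated millions of times per second across a full sweep, demand the high-performance onboard computing the following paragraphs describe.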


Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

III. Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37


Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
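The comparison step Buolamwini describes is, at its core, a nearest-neighbor search over a database of stored face representations. The sketch below uses made-up three-number "embeddings" (real systems use vectors with hundreds of dimensions learned by a neural network), which also makes the bias point concrete: the system can only ever answer with someone who is in the database:

```python
import math

database = {  # invented embeddings for illustration
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

def closest_match(query, database):
    # return the enrolled identity whose stored vector is nearest
    # to the query face, by Euclidean distance
    return min(database, key=lambda name: math.dist(query, database[name]))

print(closest_match([0.2, 0.8, 0.35], database))  # alice
```

If the enrolled vectors come overwhelmingly from one demographic group, both the learned embedding and this matching step will be less reliable for everyone else, which is the failure mode the paragraph describes.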

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

IV. Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.
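As an illustration of the kind of analysis these APIs enable, a researcher who has already retrieved a batch of tweets might tally keyword mentions to gauge reaction to an event. This is only a sketch: the sample tweets and keywords below are hypothetical, and no actual API call is shown.

```python
from collections import Counter

def reaction_counts(tweets, keywords):
    """Count how many tweets mention each keyword.

    `tweets` is a list of tweet texts (e.g., as returned by a social
    media API); `keywords` are the event-related terms being tracked.
    """
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for kw in keywords:
            if kw.lower() in lowered:
                counts[kw] += 1
    return counts

# Hypothetical sample of tweets about a transit disruption.
sample = [
    "Huge delays on the subway again today",
    "Subway service is finally back to normal",
    "Bus lines rerouted because of the subway outage",
]
print(reaction_counts(sample, ["subway", "bus"]))
```

In practice, researchers would page through API results and apply far richer text analysis, but the pattern of pulling platform data and aggregating it locally is the same.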

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
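As a toy illustration of the de-identification idea (not the National Cancer Institute's actual protocol), direct identifiers can be replaced with salted one-way hashes before records are shared, so researchers can link a patient's records without learning who the patient is. Every field name and value below is hypothetical, and real de-identification involves far more than hashing IDs.

```python
import hashlib

def deidentify(records, salt):
    """Replace direct patient identifiers with salted one-way hashes.

    The salt keeps the pseudonyms stable within one data release while
    preventing trivial re-identification by dictionary attack.
    """
    shared = []
    for rec in records:
        token = hashlib.sha256((salt + rec["patient_id"]).encode()).hexdigest()[:16]
        shared.append({
            "patient_token": token,   # stable pseudonym, not reversible
            "diagnosis": rec["diagnosis"],
            "therapy": rec["therapy"],
        })
    return shared

records = [{"patient_id": "P-1001", "diagnosis": "C50.9", "therapy": "tamoxifen"}]
print(deidentify(records, "research-release-2018"))
```

Because the hash is deterministic for a given salt, the same patient maps to the same token across tables, which is what lets certified researchers query linked clinical, claims, and drug-therapy data without seeing identities.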

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers; unless our educational system generates more people with these capabilities, AI development will be constrained.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration within 540 days of enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services make specific algorithmic choices that affect them.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
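The aggregation step of such a voting-based system can be sketched simply: collect respondents' preferred actions for each dilemma scenario and adopt the majority choice. This is only an illustration of the idea, not the researchers' actual method, and the scenario and action labels are invented.

```python
from collections import Counter

def aggregate_votes(votes_by_scenario):
    """Pick the majority-preferred action for each dilemma scenario.

    `votes_by_scenario` maps a scenario description to the list of
    actions that individual respondents preferred in that scenario.
    """
    policy = {}
    for scenario, votes in votes_by_scenario.items():
        policy[scenario] = Counter(votes).most_common(1)[0][0]
    return policy

# Hypothetical poll responses for one trolley-style scenario.
votes = {
    "swerve_into_barrier_vs_hit_pedestrians": [
        "protect_pedestrians", "protect_pedestrians", "protect_passengers",
    ],
}
print(aggregate_votes(votes))
```

The resulting scenario-to-action mapping is what an AI developer could consult when deciding how a vehicle should behave in situations resembling the polled scenarios.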

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
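As a simplified sketch of how a voice firewall might encode its blocking policies, the check below combines a blocklist with call-frequency and spoofing heuristics. The bank's actual engine is machine-learning based, and every field name and threshold here is a hypothetical stand-in.

```python
def should_block(call, blocklist, max_calls_per_hour=20):
    """Apply simple firewall policies to an incoming call record.

    A rule-based stand-in for the machine-learning policy engine
    described above; real systems learn these patterns from data.
    """
    if call["caller_id"] in blocklist:                 # known harassing caller
        return True
    if call["calls_last_hour"] > max_calls_per_hour:   # robocall-like volume
        return True
    if call["spoof_score"] > 0.9:                      # likely fraudulent caller ID
        return True
    return False

# A hypothetical call record exhibiting robocall-like behavior.
call = {"caller_id": "+15550100", "calls_last_hour": 240, "spoof_score": 0.2}
print(should_block(call, blocklist=set()))  # → True
```

A learned policy engine replaces these hand-written thresholds with a model trained on labeled call traffic, but the deployment shape — score each incoming call, block above a policy threshold — is the same.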

V. Conclusion

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

Report Produced by Center for Technology Innovation


Research Paper on Artificial Intelligence

1. ABSTRACT: This branch of computer science is concerned with making computers behave like humans. Artificial intelligence includes game playing, expert systems, neural networks, natural language processing, and robotics. Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of game playing. The best computer chess programs are now capable of beating humans. Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing. There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog. Artificial intelligence is doing much to reduce human effort, though its growth has been slow.

A Brief History of Artificial Intelligence

Sidebar to the article Using Artificial Intelligence to Address Criminal Justice Needs , by Christopher Rigano. published in NIJ Journal issue no. 280.

1950: Alan Turing publishes his paper on creating thinking machines. [1]

1956: John McCarthy presents his definition of artificial intelligence. [2]

1956-1974: Reasoning searches, or means-to-end algorithms, were first developed to “walk” simple decision paths and make decisions. [3]  Such approaches provided the ability to solve complex mathematical expressions and to process strings of words; this processing of words is known as natural language processing. These approaches led to the ability to formulate logic and rules for interpreting and formulating sentences, and also marked the beginning of game theory, which was realized in basic computer games. [4]
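The decision-path “walking” of this era can be illustrated with a short breadth-first search: starting from an initial state, the program explores reachable states level by level until it finds one that satisfies the goal, then reports the path of choices. The graph below is a toy example, not drawn from any of the cited systems.

```python
from collections import deque

def find_decision_path(graph, start, goal):
    """Breadth-first search over a decision graph.

    Explores states level by level and returns the shortest sequence
    of choices from `start` to `goal`, or None if no path exists.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy decision graph: each state lists the states reachable from it.
graph = {"start": ["a", "b"], "a": ["goal"], "b": ["c"], "c": ["goal"]}
print(find_decision_path(graph, "start", "goal"))  # → ['start', 'a', 'goal']
```

Early systems combined searches like this with heuristics to prune the paths explored, which is what made them practical on the limited hardware of the time.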

1980-1987: Complex systems were developed using logic rules and reasoning algorithms that mimic human experts. This began the rise of expert systems, such as decision support tools that learned the “rules” of a specific knowledge domain like those that a physician would follow when performing a medical diagnosis. [5]  Such systems were capable of complex reasoning but, unlike humans, they could not learn new rules to evolve and expand their decision-making. [6]

1993-2009: Biologically inspired software known as “neural networks” came on the scene. These networks mimic the way living things learn how to identify complex patterns and, in doing so, can complete complex tasks. Character recognition for license plate readers was one of the first applications. [7]
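A minimal sketch of the learning idea behind such networks is the single-neuron perceptron: it adjusts its weights whenever it misclassifies an example, repeating until the pattern is learned. The toy pattern below (fire only when both inputs are on) stands in for tasks like character recognition, which use many such units.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Train a single artificial neuron with the classic perceptron rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0

    def predict(x):
        # Fire (output 1) if the weighted sum of inputs crosses threshold.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x)
            if error:  # nudge weights toward the correct answer
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return predict

# Toy pattern: the neuron learns to fire only when both inputs are on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
predict = train_perceptron(data)
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Modern networks layer many such neurons and replace the simple update rule with gradient-based training, but the core loop — predict, measure error, adjust weights — is the same.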

2010-present: Deep learning and big data are now in the limelight. Affordable graphical processing units from the gaming industry have enabled neural networks to be trained using big data. [8]  Layering these networks mimics how humans learn to recognize and categorize simple patterns into complex patterns. This software is being applied in automated facial and object detection and recognition as well as medical image diagnostics, financial patterns, and governance regulations. [9]  Projects such as Life Long Learning Machines, from the Defense Advanced Research Projects Agency, seek to further advance AI algorithms toward learning continuously in ways similar to those of humans. [10]

About This Article

This article was published as part of NIJ Journal issue number 280 , published January 2019, as a sidebar to the article Using Artificial Intelligence to Address Criminal Justice Needs , by Christopher Rigano.

[note 1]  Alan Turing, “Computing Machinery and Intelligence,”  Mind  49 (1950): 433-460.

[note 2]  The Society for the Study of Artificial Intelligence and Simulation of Behaviour, “What is Artificial Intelligence.”

[note 3]  Herbert A. Simon,  The Sciences of the Artificial  (Cambridge, MA: MIT Press, 1981).

[note 4]  Daniel Crevier,  AI: The Tumultuous Search for Artificial Intelligence  (New York: Basic Books, 1993), ISBN 0-465-02997-3.

[note 5]  Ibid.

[note 6]  Pamela McCorduck,  Machines Who Think,  2nd ed. (Natick, MA: A.K. Peters, Ltd., 2004), ISBN 1-56881-205-1, Online Computer Library Center, Inc. 

[note 7]  Navdeep Singh Gill, “Artificial Neural Networks, Neural Networks Applications and Algorithms,”  Xenonstack,  July 21, 2017; Andrew L. Beam, “Deep Learning 101 - Part 1: History and Background” and “Deep Learning 101 - Part 2: Multilayer Perceptrons,”  Machine Learning and Medicine,  February 23, 2017; and Andrej Karpathy, “CS231n: Convolutional Neural Networks for Visual Recognition,” Stanford University Computer Science Class.

[note 8]  Beam, “Deep Learning 101 - Part 1” and “Deep Learning 101 - Part 2.”

[note 9]  Karpathy, “CS231n.”

[note 10]  Defense Advanced Research Projects Agency, “Toward Machines that Improve with Experience,” March 16, 2017.

10 most impressive Research Papers around Artificial Intelligence

Artificial Intelligence research advances are transforming technology as we know it. The AI research community is solving some of the most challenging problems in software and hardware infrastructure, theory, and algorithms. Interestingly, the field of AI research has drawn acolytes from outside tech as well. Case in point: Hollywood actor Kristen Stewart’s highly publicized paper on artificial intelligence, originally published on Cornell University Library’s open-access site. Stewart co-authored the paper, titled “Bringing Impressionism to Life with Neural Style Transfer in Come Swim”, with the American poet and literary critic David Shapiro and Adobe research engineer Bhautik Joshi.

Essentially, the paper describes the style-transfer techniques used in Stewart’s short film Come Swim. Her detractors, however, dismissed it as another “high-level case study.”

Meanwhile, the community is awash with ground-breaking research papers on AI. Analytics India Magazine lists the most cited scientific papers in AI, machine intelligence, and computer vision, to give a perspective on the technology and its applications.


Most of these papers were chosen on the basis of citation count. Some also report a Highly Influential Citation count (HIC) and Citation Velocity (CV), where Citation Velocity is the weighted average number of citations per year over the last three years.


A Computational Approach to Edge Detection : Originally published in 1986 and authored by John Canny, this paper on the computational approach to edge detection has approximately 9,724 citations. The success of the approach rests on a comprehensive set of goals for the computation of edge points — goals precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution.


The paper also presents a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. This helps establish that edge-detector performance improves considerably as the operator’s point spread function is extended along the edge.
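The gradient-thresholding idea at the start of Canny’s pipeline can be sketched in a few lines of plain Python. This is an illustrative toy on a synthetic image, not the full detector — it omits the Gaussian smoothing, non-maximum suppression, and hysteresis steps the paper describes:

```python
# Illustrative sketch of the gradient-thresholding stage of a Canny-style
# edge detector on a tiny synthetic image, in plain Python. Canny's full
# detector adds Gaussian smoothing, non-maximum suppression, and
# hysteresis thresholding on top of this.

def sobel_gradient_magnitude(img):
    """Approximate the gradient magnitude with 3x3 Sobel kernels."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def edge_map(img, threshold):
    """Mark pixels whose gradient magnitude exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row]
            for row in sobel_gradient_magnitude(img)]

# A 5x5 image with a vertical step edge between columns 1 and 2.
image = [[0, 0, 255, 255, 255] for _ in range(5)]
edges = edge_map(image, threshold=100)
print(edges[2])  # -> [0, 1, 1, 0, 0]
```

Even this crude version localizes the step edge; Canny’s contribution was making such detection and localization choices precise and optimal.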

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence : This research proposal was co-written by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, and published in 1955. The proposal defined the field and has another first to its name — it is the first paper to use the term “artificial intelligence”. It invited researchers to the Dartmouth conference, which is widely considered the birth of AI.

A Threshold Selection Method from Gray-Level Histograms : Authored by Nobuyuki Otsu and published in 1979, this paper has received 7,849 citations so far. In it, Otsu discusses a nonparametric and unsupervised method of automatic threshold selection for picture segmentation.

The paper shows how an optimal threshold can be selected by the discriminant criterion so as to maximize the separability of the resultant classes in gray levels. The procedure utilizes only the zeroth- and first-order cumulative moments of the gray-level histogram, and extends easily to multi-threshold problems. Several experimental results validate the method.
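Because the criterion uses only the zeroth- and first-order cumulative moments of the histogram, the whole method fits in a short function. A minimal sketch — an exhaustive sweep over candidate thresholds, maximizing between-class variance:

```python
# Minimal sketch of Otsu's method: pick the threshold that maximizes the
# between-class variance, computed incrementally from the zeroth-order (w0)
# and first-order (sum0) cumulative moments of the gray-level histogram.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))

    w0 = 0            # zeroth-order cumulative moment (weight of class 0)
    sum0 = 0          # first-order cumulative moment (sum over class 0)
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# A bimodal "image": dark pixels around 50-55, bright pixels around 200-205.
pixels = [50] * 40 + [55] * 10 + [200] * 40 + [205] * 10
print(otsu_threshold(pixels))  # -> 55
```

The returned threshold cleanly separates the two modes of the toy histogram; any gray level between the modes would segment identically, and the sweep keeps the first maximizer.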

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift : This 2015 paper was co-written by Sergey Ioffe and Christian Szegedy. It has received 946 citations, with a HIC score of 56.


The paper describes how training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training as the parameters of the previous layers change — a phenomenon the authors term internal covariate shift. They address it by normalizing layer inputs.

Applied to a state-of-the-art image classification model, batch normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
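The core training-time transform is simple to state: per feature, normalize over the mini-batch, then scale and shift with the learnable parameters gamma and beta. A framework-free sketch (a toy illustration, not the paper’s full algorithm, which also tracks running statistics for inference):

```python
# Sketch of the core batch-normalization transform: normalize each feature
# over the mini-batch, then scale and shift with the learnable parameters
# gamma and beta.

def batch_norm(batch, gamma, beta, eps=1e-5):
    """batch: list of samples, each a list of feature values."""
    n = len(batch)
    dims = len(batch[0])
    out = [[0.0] * dims for _ in range(n)]
    for j in range(dims):
        col = [sample[j] for sample in batch]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        for i in range(n):
            x_hat = (batch[i][j] - mean) / (var + eps) ** 0.5
            out[i][j] = gamma[j] * x_hat + beta[j]
    return out

batch = [[1.0, 100.0], [3.0, 300.0], [5.0, 500.0]]
normalized = batch_norm(batch, gamma=[1.0, 1.0], beta=[0.0, 0.0])
# Each feature now has (approximately) zero mean and unit variance,
# regardless of its original scale.
print([round(v, 3) for v in normalized[0]])  # -> [-1.225, -1.225]
```

Note how both features normalize to the same values despite their scales differing by a factor of 100 — this is what stabilizes the input distribution each layer sees.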

Deep Residual Learning for Image Recognition : This 2016 paper was co-authored by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. It has been cited 1,436 times, with a HIC value of 137 and a CV of 582. The authors present a residual learning framework to ease the training of neural networks that are substantially deeper than those used previously.

The paper explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. Comprehensive empirical evidence shows that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
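The reformulation itself is almost a one-liner: instead of a block of layers learning a mapping H(x) directly, it learns the residual F(x) = H(x) - x and outputs F(x) + x, making the identity mapping trivial to represent. A toy scalar sketch — the “layers” here are stand-in functions, not the paper’s convolutional stacks:

```python
# Toy contrast between a plain block (learns H(x) directly) and a residual
# block (learns only the residual F(x) and outputs F(x) + x). Scalar
# "layers" stand in for the paper's stacked convolutional layers.

def plain_block(x, layer):
    return layer(x)          # must learn the whole mapping H(x)

def residual_block(x, layer):
    return layer(x) + x      # only needs to learn the residual F(x)

# If the optimal mapping is near the identity, the residual block's layer
# merely has to output values near zero:
identity_friendly_layer = lambda x: 0.0
print(plain_block(5.0, identity_friendly_layer))     # -> 0.0
print(residual_block(5.0, identity_friendly_layer))  # -> 5.0
```

This is why extreme depth becomes tractable: stacking residual blocks can never do worse than the identity, so adding layers does not, by construction, have to degrade the signal.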

Distinctive Image Features from Scale-Invariant Keypoints : This paper was authored by David G. Lowe in 2004. It has received 21,528 citations and explores a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination.

The paper additionally describes an approach that leverages these features for recognition, helping to identify objects among clutter and occlusion while achieving near real-time performance.

Dropout: A Simple Way to Prevent Neural Networks from Overfitting : This 2014 paper was co-authored by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. It has been cited around 2,084 times, with HIC and CV values of 142 and 536 respectively. Deep neural nets with a large number of parameters are very powerful machine learning systems, but overfitting is a serious problem in such networks.

The central premise of the paper is to randomly drop units (along with their connections) from the neural network during training, thus preventing units from co-adapting too much. This significantly reduces overfitting and yields major improvements over other regularization methods.
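The training-time mechanics fit in a few lines. This sketch uses the common “inverted” dropout variant (an implementation convention, not the paper’s exact presentation): survivors are scaled by 1/(1 - p) during training so that no rescaling is needed at test time:

```python
# Sketch of (inverted) dropout at training time: each unit's activation is
# zeroed with probability p, and survivors are scaled by 1/(1-p) so the
# expected activation is unchanged and test time needs no rescaling.

import random

def dropout(activations, p, rng):
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)  # seeded for reproducibility
layer_output = [0.5, 1.2, -0.3, 0.8, 2.0, -1.1]
print(dropout(layer_output, p=0.5, rng=rng))  # -> [0.0, 0.0, -0.6, 1.6, 0.0, -2.2]
```

Each forward pass samples a different “thinned” network; at test time the full network is used, which the paper interprets as an inexpensive approximation to averaging exponentially many thinned models.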

Induction of Decision Trees : Authored by J. R. Quinlan, this paper was originally published in 1986 and summarizes an approach to synthesizing decision trees that has been used in a variety of systems, describing one such system, ID3, in detail. The paper also discusses a reported shortcoming of the basic algorithm and compares two methods of overcoming it, concluding with illustrations of current research directions.
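The heart of ID3 is its attribute-selection step: split on the attribute with the greatest information gain, i.e. the greatest reduction in the entropy of the class label. A minimal sketch on made-up weather-style records:

```python
# Sketch of ID3's attribute-selection step: choose the attribute whose
# split yields the largest information gain (reduction in label entropy).
# The toy records below are made-up for illustration.

from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(records, attribute, labels):
    base = entropy(labels)
    n = len(records)
    remainder = 0.0
    for value in set(r[attribute] for r in records):
        subset = [lab for r, lab in zip(records, labels) if r[attribute] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

records = [
    {"outlook": "sunny", "windy": False},
    {"outlook": "sunny", "windy": True},
    {"outlook": "rain",  "windy": False},
    {"outlook": "rain",  "windy": True},
]
labels = ["no", "no", "yes", "yes"]  # perfectly determined by "outlook"
print(information_gain(records, "outlook", labels))  # -> 1.0
print(information_gain(records, "windy", labels))    # -> 0.0
```

ID3 would split on “outlook” here, since it removes all uncertainty about the label while “windy” removes none; the full algorithm then recurses on each branch.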


Large-Scale Video Classification with Convolutional Neural Networks : This 2014 paper was co-written by six authors: Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. It has been cited over 865 times, with a HIC score of 24 and a CV of 239.

Convolutional neural networks (CNNs) had proven to be a powerful class of models for image recognition problems. Those results encouraged the authors to provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes.

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference : Authored by Judea Pearl and published in 1988, this book presents a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty.

Pearl provides a coherent explication of probability as a language for reasoning with partial belief, and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.
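The simplest instance of the plausible reasoning Pearl formalizes is a Bayes-rule update of belief in a hypothesis given evidence; the disease-test numbers below are made-up for illustration:

```python
# Sketch of the kind of plausible reasoning Pearl formalizes: updating the
# probability of a binary hypothesis from evidence with Bayes' rule. The
# disease/test numbers are made-up assumptions for illustration.

def bayes_update(prior, likelihood, false_positive_rate):
    """P(H | E) via Bayes' rule for a binary hypothesis and evidence."""
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    return likelihood * prior / evidence

# Rare condition (1% prior), sensitive test (90%), 5% false positives:
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # -> 0.154
```

A positive test raises belief from 1% to only about 15% — exactly the kind of counter-intuitive but coherent partial-belief calculation that Pearl’s networks propagate across many interrelated variables at once.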


Amit Paul Chowdhury




While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in a 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence", published in 1950. In this paper Turing, often referred to as the "father of computer science", asks the question, "Can machines think?" From there, he offers a test, now famously known as the Turing test, in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas around linguistics.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

Human approach:
- Systems that think like humans
- Systems that act like humans

Ideal approach:
- Systems that think rationally
- Systems that act rationally

Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is expected of any emerging technology. As noted in Gartner’s hype cycle, product innovations like self-driving cars and personal assistants follow “a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation’s relevance and role in a market or domain.” As Lex Fridman noted (at 01:08:05) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn where IBM stands within the conversation around AI ethics, read more here.

Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have an intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in  2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it’s worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built on neural networks. The “deep” in deep learning refers to a neural network comprised of more than three layers—inclusive of the input and output layers—which qualifies it as a deep learning algorithm.

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature-extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture mentioned above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn’t necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.
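The “more than three layers” definition above can be made concrete with a minimal forward pass — one input layer, two hidden layers, and one output layer. All weights and inputs here are arbitrary made-up numbers, purely for illustration:

```python
# Toy sketch of a "deep" network in the sense described above: an input
# layer, two hidden layers, and an output layer, each fully connected,
# with a ReLU nonlinearity. All weights are arbitrary assumptions.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    # one fully connected layer: out_i = bias_i + sum_j weights[i][j] * v[j]
    return [b + sum(w * x for w, x in zip(row, v)) for row, b in zip(weights, bias)]

def forward(x):
    h1 = relu(dense(x, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]))   # hidden layer 1
    h2 = relu(dense(h1, [[0.4, 0.4], [-0.6, 0.2]], [0.0, 0.0]))  # hidden layer 2
    return dense(h2, [[1.0, 1.0]], [0.0])                         # output layer

print(forward([1.0, 2.0]))  # a single output value (about 0.46 with these weights)
```

Training would adjust those weight matrices from data; the point of the sketch is only the layered structure that makes the network “deep”.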

There are numerous real-world applications of AI systems today.

The idea of 'a machine that thinks' dates back to ancient Greece, but the important events and milestones in the evolution of artificial intelligence have come since the advent of electronic computing.



How AI technology can tame the scientific literature

Andy Extance is a freelance writer based in Exeter, UK.


Illustration by The Project Twins


When computer scientist Christian Berger’s team sought to get its project about self-driving vehicle algorithms on the road, it faced a daunting obstacle. The scientists, at the University of Gothenburg in Sweden, found an overwhelming number of papers on the topic — more than 10,000 — in a systematic literature review. Investigating them properly would have taken a year, Berger says.

Luckily, they had help: a literature-exploration tool powered by artificial intelligence (AI), called Using a 300-to-500-word description of a researcher’s problem, or the URL of an existing paper, the Berlin-based service returns a map of thousands of matching documents, visually grouped by topic. The results, Berger says, provide “a quick and nevertheless precise overview of what should be relevant to a certain research question”. is among a bevy of new AI-based search tools offering targeted navigation of the knowledge landscape. Such tools include the popular Semantic Scholar, developed by the Allen Institute for Artificial Intelligence in Seattle, Washington, and Microsoft Academic. Although each tool serves a specific niche, they all provide scientists with a different look at the scientific literature than do conventional tools such as PubMed and Google Scholar. Many are helping researchers to validate existing scientific hypotheses. And some, by revealing hidden connections between findings, can even suggest new hypotheses for guiding experiments.

Such tools provide “state-of-the-art information retrieval”, says Giovanni Colavizza, a research data scientist at the Alan Turing Institute in London, who studies full-text analysis of scholarly publications. Whereas conventional tools act largely as citation indices, AI-based ones can offer a more penetrating view of the literature, Colavizza says.

That said, these tools are often expensive, and limited by the fraction of the scientific literature they search. “They are not meant to give you an exhaustive search,” says Suzanne Fricke, an animal-health librarian at Washington State University in Pullman, who has written a resource review on Semantic Scholar ( S. Fricke J. Med. Lib. Assoc. 106 , 145–147; 2018 ). Some, for example, “are meant to get you quickly caught up on a topic, which is why they should be used in conjunction with other tools”. Berger echoes this sentiment: “Blindly using any research engine doesn’t answer every question automatically.”

Teaching science to machines

AI-based ‘speed-readers’ are useful because the scientific literature is so vast. By one estimate, new papers are published worldwide at a rate of 1 million each year — that’s one every 30 seconds. It is practically impossible for researchers to keep up, even in their own narrow disciplines. So, some seek to computationally tame the flood.

The algorithms powering such tools typically perform two functions — they extract scientific content and provide advanced services, such as filtering, ranking and grouping search results. Algorithms extracting scientific content often exploit natural language processing (NLP) techniques, which seek to interpret language as humans use it, Colavizza explains. Developers can use supervised machine learning, for example — which involves ‘tagging’ entities, such as a paper’s authors and references, in training sets to teach algorithms to identify and extract them.

To provide more-advanced services, algorithms often construct ‘knowledge graphs’ that detail relationships between the extracted entities and show them to users. For example, the AI could suggest that a drug and a protein are related if they’re mentioned in the same sentence. “The knowledge graph encodes this as an explicit relationship in a database, and not just in a sentence on a document, essentially making it machine readable,” Colavizza says. The Berlin-based tool takes a different approach, Colavizza notes, grouping documents into topics defined by the words they use. It trawls the CORE collection, a searchable database of more than 134 million open-access papers, as well as journals to which the user’s library provides access. The tool blends three algorithms to create ‘document fingerprints’ that reflect word-usage frequencies, which are then used to rank papers according to relevance, says its chief technology officer, Viktor Botev.
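A rough intuition for word-frequency “document fingerprints” can be sketched with term-frequency vectors compared by cosine similarity to the query description. This is an illustrative toy only — not the Berlin tool’s actual multi-algorithm pipeline:

```python
# Toy sketch of ranking papers by word-usage "fingerprints": each document
# becomes a term-frequency vector, and documents are ranked by cosine
# similarity to the query description. Real literature tools layer far
# more sophistication on top of this; it illustrates the idea only.

from collections import Counter
from math import sqrt

def fingerprint(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(query, documents):
    q = fingerprint(query)
    scored = [(cosine(q, fingerprint(d)), d) for d in documents]
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "self driving vehicle algorithms for urban roads",
    "protein folding with deep learning",
    "pedestrian detection for self driving cars",
]
query = "algorithms for self driving vehicles"
print(rank(query, docs)[0])  # -> self driving vehicle algorithms for urban roads
```

The closest “fingerprint” wins even though it doesn’t repeat the query verbatim — the overlap in word-usage frequencies is what drives the ranking.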

The result is a map of related papers, but eventually the company plans to supplement those results by identifying hypotheses explored in each paper as well. It is also developing a parallel, blockchain-based effort called Project Aiur, which seeks to use AI to check every aspect of a research paper against other scientific documents, thus validating hypotheses.

Colavizza says that a tool like this — free for basic queries, but costing upwards of €20,000 (US$23,000) a year for premium access, which allows more-nuanced searches — can accelerate researchers’ entry into new fields. “It facilitates initial exploration of the literature in a domain in which I’m marginally familiar,” he says.

Experts seeking deeper insights into their own specialities might consider free AI-powered tools such as Microsoft Academic or Semantic Scholar, Colavizza suggests. Another similar option is Dimensions, a tool whose basic use is free but which costs to search and analyse grant and patent data, as well as to access data using the programmable Dimensions Search Language. (Dimensions is created by technology firm Digital Science, operated by the Holtzbrinck Publishing Group, which also has a majority share in Nature ’s publisher.)

Semantic Scholar has a browser-based search bar that closely mimics engines such as Google. But it gives more information than Google Scholar to help experts to prioritize results, Colavizza says. That includes popularity metrics, topics such as data sets and methods, and the exact excerpt in which text is cited. “I was very surprised to find that they also capture indirect citations,” Colavizza adds — such as when a method or idea is so well established that researchers don’t refer to its origin.

Doug Raymond, Semantic Scholar’s general manager, says that one million people use the service each month. Semantic Scholar uses NLP to extract information while simultaneously building connections to determine whether information is relevant and reputable, Raymond says. It can identify non-obvious connections, such as methodologies in computer science that are relevant to computational biology, he adds, and it can help to identify unsolved problems or important hypotheses to validate or disprove. Currently, Semantic Scholar incorporates more than 40 million documents from computer and biomedical science, and its corpus is growing, says Raymond. “Ultimately, we’d like to incorporate all academic knowledge.”

For other tools, such as SourceData from the European Molecular Biology Organization (EMBO) in Heidelberg, Germany, experimental data are a more central concern. As chief editor of Molecular Systems Biology , an EMBO publication, Thomas Lemberger wants to make the data underlying figures easier to find and interrogate. SourceData therefore delves into figures and their captions to list biological objects involved in an experiment, such as small molecules, genes or organisms. It then allows researchers to query those relationships, identifying papers that address the question. For instance, searching ‘Does insulin affect glucose?’ retrieves ten papers in which the “influence of insulin (molecule) on glucose (molecule) is measured”.

SourceData is at an early stage, Lemberger says, having generated a knowledge graph comprising 20,000 experiments that were manually curated during the editing process for roughly 1,000 articles. The online tool is currently limited to querying this data set, but Lemberger and his colleagues are training machine-learning algorithms on it. The SourceData team is also working on a modified neuroscience-focused version of the tool with an interdisciplinary neuroscience consortium led by neurobiologist Matthew Larkum at Humboldt University in Berlin. Elsewhere, IBM Watson Health in Cambridge, Massachusetts, announced in August that it will combine its AI with genomics data from Springer Nature to help oncologists to define treatments. ( Nature ’s news team is editorially independent of its publisher.)

Hypothetically useful

Among those embarking on hypothesis generation are the roughly 20 customers of Euretos, based in Utrecht, the Netherlands. Arie Baak, who co-founded Euretos, explains that the company sells tools to industry and academia, mainly for biomarker and drug-target discovery and validation, for prices he did not disclose.

Euretos uses NLP to interpret research papers, but this is secondary to the 200-plus biomedical-data repositories it integrates. To understand them, the tool relies on the many ‘ontologies’ — that is, structured keyword lists — that life scientists have created to define and connect concepts in their subject areas.

Baak demonstrates by searching for a signalling protein called CXCL13. Above the resulting publication list are categories such as ‘metabolites’ or ‘diseases’. The screen looks much like Google Scholar or PubMed at this stage, with an ordered list of results. But clicking on a category reveals extra dimensions. Selecting ‘genes’, for instance, pulls up a list of the genes associated with CXCL13, ranked by how many publications mention them; another click brings up diagrams illustrating connections between CXCL13 and other genes.

Researchers at the Leiden University Medical Centre (LUMC) in the Netherlands have shown that this approach can yield new hypotheses, identifying candidate diseases that existing drugs might treat. The team presented its results at the Semantic Web Applications and Tools for Health Care and Life Sciences meeting in Rome in December 2017. They have also used Euretos to identify gene-expression changes in a neurological disorder called spinocerebellar ataxia type 3 ( L. Toonen et al. Mol. Neurodegener. 13 , 31; 2018 ).

So, should researchers worry that AI-based hypothesis generation could put them out of a job? Not according to Colavizza. Hypothesis generation is a “very challenging ambition”, he says, and improvements initially will be incremental. The hypotheses suggested so far are therefore “mostly in the realm of the relatively unsurprising ones”, Colavizza says.

That will probably change, of course. But surprising or not, computer-generated hypotheses must still be tested. And that requires human researchers. “One should never believe an auto-generated hypothesis first-hand without investigating the underlying evidence,” warns LUMC researcher Kristina Hettne. “Even though these tools can assist in collecting the known evidence, experimental validation is a must.”

Nature 561 , 273-274 (2018)


Updates & Corrections

Correction 05 October 2018 : An earlier version of this Toolbox referred to the CORE repository by its old name, Connecting Repositories.


Research Scientist - Chemistry Research & Innovation

MRC National Institute for Medical Research

Harwell Campus, Oxfordshire, United Kingdom

POST-DOC POSITIONS IN THE FIELD OF “Automated Miniaturized Chemistry” supervised by Prof. Alexander Dömling

Palacky University (PU)

Olomouc, Czech Republic

Ph.D. POSITIONS IN THE FIELD OF “Automated miniaturized chemistry” supervised by Prof. Alexander Dömling

Czech advanced technology and research institute opens a senior researcher position in the field of “automated miniaturized chemistry” supervised by prof. alexander dömling.

first artificial intelligence research paper

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

Quick links

> cs > cs.AI

Help | Advanced Search

Artificial Intelligence

Authors and titles for recent submissions.

Fri, 3 Mar 2023 (showing first 25 of 78 entries)

Links to: arXiv , form interface , find , cs , new , 2303 , contact , h elp   ( Access key information)

first artificial intelligence research paper

Towards Data Science

Sergei Ivanov

Mar 8, 2021


Top-10 Research Papers in AI

The most-cited AI works that influence our daily life today

Each year, scientists around the world publish thousands of research papers in AI, but only a few reach wide audiences and make a global impact. Below are the top-10 most impactful research papers published at top AI venues during the last 5 years. The ranking is based on citation counts and covers the major AI conferences and journals.

Explaining and Harnessing Adversarial Examples, Goodfellow et al., ICLR 2015, cited by 6995

What? One of the first fast methods for generating adversarial examples for neural networks, along with the introduction of adversarial training as a regularization technique.

Impact: Exposed a striking phenomenon: the performance of any accurate machine learning model can be drastically reduced by an attacker applying a tiny modification to the input. The phenomenon has since been observed in other tasks and modalities (e.g. text and video) and has spawned a vast body of research rethinking the applicability of ML to safety-critical real-world tasks.
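
The attack itself fits in a few lines. Below is a minimal numpy sketch of the fast gradient sign method on a logistic-regression stand-in (an illustrative toy, not the paper's networks; the weights and inputs are hypothetical):

```python
import numpy as np

# Minimal sketch of the fast gradient sign method (FGSM) on a logistic
# regression "model" -- an illustrative stand-in, not the paper's networks.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x + eps * sign(grad_x loss): one worst-case step per feature."""
    p = sigmoid(w @ x + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)            # model weights (hypothetical)
x = rng.normal(size=4)            # a clean input
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, eps=0.1)

# Every coordinate moves by exactly eps in the loss-increasing direction.
assert np.allclose(np.abs(x_adv - x), 0.1)
```

The sign function is what makes the attack "fast": one gradient computation yields the worst-case perturbation within an eps-sized box around the input.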

Semi-Supervised Classification with Graph Convolutional Networks, Kipf and Welling, ICLR 2017, cited by 7021

What? A simple but effective graph neural network that performs extremely well on the semi-supervised node classification task.

Impact: Discoveries of new drugs or efficient energy storage catalysts require modeling molecules as graphs. Graph convolutional networks brought the toolkit of deep learning into the graph domain, showing its superiority to hand-crafted heuristics that dominated the field before.
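
The layer's propagation rule is a single matrix equation. A toy numpy sketch on a three-node path graph, with random features and weights purely for illustration:

```python
import numpy as np

# One GCN layer following the Kipf & Welling propagation rule
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), on a toy 3-node path graph.

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = A + np.eye(3)                       # add self-loops
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt    # symmetrically normalized adjacency

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))                 # input node features
W = rng.normal(size=(4, 2))                 # learnable layer weights

H_next = np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation
assert H_next.shape == (3, 2)               # one 2-dim embedding per node
```

Each node's new embedding is a normalized average of its neighbours' features (plus its own, via the self-loop) passed through a shared linear layer, which is why the model scales to large graphs.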

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, Radford et al., ICLR 2016, cited by 8681

What? Proposed DCGAN, a deep CNN architecture for the generator of a GAN, able to synthesize natural-looking images that never existed before.

Impact: GANs are machine learning models capable of generating new images of people, animals, or objects, and as such power the machine "creativity" popularized by photo-editing and design apps. The proposed approach became foundational for modern GAN models that generate realistic images.

Mastering the game of Go with deep neural networks and tree search, Silver et al., Nature 2016, cited by 9621

What? Introduction of AlphaGo, a combination of deep reinforcement learning with Monte Carlo tree search that beat other programs and professional human players at the game of Go.

Impact: For the first time in history, a computer program defeated one of the strongest human players, Lee Sedol, a major milestone for AI that had been deemed impossible for at least another decade.

Human-level control through deep reinforcement learning, Mnih et al., Nature 2015, cited by 13615

What? Introduction of DQN, a deep reinforcement learning algorithm that achieves human-level performance on many Atari games.

Impact: The algorithms behind manufacturing, robotics, and logistics have begun moving from hard-coded rules to reinforcement learning models. DQN is one of the most popular deep reinforcement learning algorithms, showing superior performance across applications without any manually engineered strategies built in.

Neural Machine Translation by Jointly Learning to Align and Translate, Bahdanau et al., ICLR 2015, cited by 16866

What? The first use of an attention mechanism in neural machine translation. Attention lets the model focus on particular words in the source sentence rather than processing the whole sentence as one block.

Impact: Traditional machine-translation models such as RNN encoder-decoders attempt to squash all of the information about the source sentence into a single vector. The realization that a model can represent each word as a vector and then attend to each of them was a major paradigm shift in how neural networks are built, not only in NLP but across ML.
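
The mechanism can be sketched in a few lines. The toy below uses dot-product scoring for brevity (Bahdanau et al. actually score with a small feed-forward network), with hypothetical encoder and decoder states:

```python
import numpy as np

# Sketch of attention over a source sentence: score each encoder state
# against the current decoder state, softmax the scores into weights, and
# return the weighted sum of encoder states as the context vector.

def attend(decoder_state, encoder_states):
    scores = encoder_states @ decoder_state      # one score per source word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over source words
    context = weights @ encoder_states           # weighted sum of states
    return context, weights

rng = np.random.default_rng(1)
enc = rng.normal(size=(5, 8))   # 5 source words, 8-dim encoder states
dec = rng.normal(size=8)        # current decoder state
context, alphas = attend(dec, enc)

assert context.shape == (8,)
assert np.isclose(alphas.sum(), 1.0)  # weights form a probability distribution
```

The weights are recomputed at every decoding step, so the model "aligns" each target word with the source words most relevant to it.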

Attention Is All You Need, Vaswani et al., NeurIPS 2017, cited by 18178

What? An effective neural network architecture, the Transformer, based solely on the attention mechanism, achieving excellent performance in machine translation.

Impact: The multi-head attention block introduced in the Transformer has become the de facto standard building block of deep learning, forming the core of popular language models such as BERT. It has replaced RNNs and CNNs as the default model for many applications dealing with text and images.

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, Ren et al., NeurIPS 2015, cited by 19915

What? Efficient end-to-end convolutional neural network for object detection in images and videos.

Impact: Faster R-CNN is responsible for the boom of CV applications in industrial settings. Its use in security cameras, self-driving cars, and mobile apps greatly influences how we perceive machines today.

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Ioffe and Szegedy, ICML 2015, cited by 25297

What? A simple method to make neural networks train faster and more stably by normalizing the inputs of each layer over the mini-batch.

Impact: One of the most popular tricks in modern neural network architectures. The presence of batch norm is one of the reasons deep neural networks achieve state-of-the-art results today.
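
The normalization itself is short enough to sketch directly. A minimal numpy version (the learnable scale and shift, gamma and beta, are left at fixed defaults here):

```python
import numpy as np

# Batch normalization: normalize each feature over the mini-batch, then
# rescale with gamma/beta (learnable in practice, fixed here).

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)          # per-feature mean over the batch
    var = x.var(axis=0)            # per-feature variance over the batch
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # a mini-batch
y = batch_norm(x)

# After normalization each feature has ~zero mean and ~unit variance.
assert np.allclose(y.mean(axis=0), 0.0, atol=1e-6)
assert np.allclose(y.std(axis=0), 1.0, atol=1e-3)
```

Keeping each layer's inputs in a stable range lets larger learning rates be used, which is where the training speed-up comes from.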

Adam: A Method for Stochastic Optimization, Kingma and Ba, ICLR 2015, cited by 67514

What? Adam, a popular variant of stochastic gradient descent that provides fast convergence when training neural networks.

Impact: Adam has become the default optimization algorithm for the millions of neural networks that people train nowadays.
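
The update rule is compact enough to write out. A minimal numpy sketch, shown minimizing a toy quadratic (hyperparameters are the paper's defaults except the learning rate, which is raised here to suit the toy problem):

```python
import numpy as np

# The Adam update from Kingma & Ba: keep exponential moving averages of the
# gradient (m) and squared gradient (v), bias-correct both, and scale the
# step by the square root of the second moment.

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)      # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(x) = x^2, gradient 2x, starting from x = 3.
theta, m, v = 3.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
assert abs(theta) < 0.1            # converged close to the minimum at 0
```

Dividing by the running second-moment estimate gives each parameter its own effective step size, which is why Adam needs so little learning-rate tuning in practice.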

Acknowledgments: This article has been written with the help of Ekaterina Vorobyeva, Evgeniya Ustinova, Elvis Dohmatob, Sergey Kolesnikov, Valentin Malykh. Thank you!

P.S. If you like this story, consider following my telegram channel , twitter , and newsletter .


Apple has published its first AI research paper

Apple has stayed true to its promise and published its first academic paper on artificial intelligence.

The world's most valuable company has traditionally kept its AI research private, but earlier this month Ruslan Salakhutdinov, Apple's director of AI research, pledged to start being more open.

The new Apple paper, published December 22 and titled "Learning from simulated and unsupervised images through adversarial training", gives an insight into some of the techniques that Apple is using to develop AI.

In the study, which was published through the Cornell University Library, Apple researchers explain a technique that can be used to improve how an algorithm learns to "see" what is in an image.

The paper's six authors state that using synthetic images (such as those seen in a video game), as opposed to real-world images, can be more efficient when it comes to training AI models known as neural networks, which are loosely inspired by the human brain. Why? Because synthetic image data comes already labelled and annotated, while real-world images do not.

However, using synthetic images has its problems. The Apple researchers write that such images are "often not realistic enough, leading the network to learn details only present in synthetic images", and that networks trained on them "fail to generalise well on real images."

In order to get around this issue, the researchers propose using a technique they call "Simulated+Unsupervised learning," which combines unlabelled real image data with annotated synthetic images.

The paper's lead author was Apple researcher Ashish Shrivastava. Other authors include Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb.

Apple's software can already identify people's faces in photos, but the company and its rivals now stand to benefit by programming machines to learn how to identify places, animals, brands, and other things.

Apple's paper comes after Facebook's head of AI Yann LeCun said that Apple's reluctance to let researchers publish their work could be hindering its hiring efforts in the highly competitive field where Google, Amazon, and DeepMind are also looking to recruit the best talent.

Describing how Facebook gets the most talented software engineers in the world to come and work on Facebook's AI efforts, LeCun said: "Offering researchers the possibility of doing open research, which is publishing their work.

"In fact, at FAIR [Facebook Artificial Intelligence Research], it’s not just a possibility, it’s a requirement," he said in London. "So, [when] you’re a researcher, you assume that you’re going to publish your work. It’s very important for a scientist because the currency of the career as a scientist is the intellectual impact. So you can’t tell people 'come work for us but you can’t tell people what you’re doing' because you basically ruin their career. That’s a big element."

Jack Clark, who writes the Import AI newsletter, wrote in his latest email: "Apple’s participation in the AI community will help it hire more AI researchers, while benefiting the broader AI community."

It's likely that the majority of Apple's AI research takes place at its Cupertino headquarters, but the iPhone maker also has a number of satellite AI outposts around the world, including a secret Siri lab in Cambridge, UK.

Apple did not immediately respond to Business Insider's request for comment.


Artificial Intelligence

About the journal

An International Journal

Aims & Scope

The journal of Artificial Intelligence (AIJ) welcomes papers on broad aspects of AI that constitute advances in the overall field including, but not limited to, cognition and AI, automated reasoning and inference, case-based reasoning, commonsense reasoning, computer vision, constraint …


Sylvie Thiébaux, PhD

Australian National University, Canberra, Australia

Michael Wooldridge

University of Oxford, Department of Computer Science, Oxford, United Kingdom


Artificial Intelligence System for Detection and Screening of Cardiac Abnormalities using Electrocardiogram Images

10 Feb 2023 · Deyun Zhang, Shijia Geng, Yang Zhou, Weilun Xu, Guodong Wei, Kai Wang, Jie Yu, Qiang Zhu, Yongkui Li, Yonghong Zhao, Xingyue Chen, Rui Zhang, Zhaoji Fu, Rongbo Zhou, Yanqi E, Sumei Fan, Qinghao Zhao, Chuandong Cheng, Nan Peng, Liang Zhang, Linlin Zheng, Jianjun Chu, Hongbin Xu, Chen Tan, Jian Liu, Huayue Tao, Tong Liu, Kangyin Chen, Chenyang Jiang, Xingpeng Liu, Shenda Hong

The artificial intelligence (AI) system has achieved expert-level performance in electrocardiogram (ECG) signal analysis. However, in underdeveloped countries or regions where the healthcare information system is imperfect, only paper ECGs can be provided. Analysis of real-world ECG images (photos or scans of paper ECGs) remains challenging due to complex environments or interference. In this study, we present an AI system developed to detect and screen cardiac abnormalities (CAs) from real-world ECG images. The system was evaluated on a large dataset of 52,357 patients from multiple regions and populations across the world. On the detection task, the AI system obtained area under the receiver operating characteristic curve (AUC) of 0.996 (hold-out test), 0.994 (external test 1), 0.984 (external test 2), and 0.979 (external test 3), respectively. Meanwhile, the detection results of the AI system showed a strong correlation with the diagnosis of cardiologists (cardiologist 1 (R=0.794, p<1e-3), cardiologist 2 (R=0.812, p<1e-3)). On the screening task, the AI system achieved AUCs of 0.894 (hold-out test) and 0.850 (external test). The screening performance of the AI system was better than that of the cardiologists (AI system (0.846) vs. cardiologist 1 (0.520) vs. cardiologist 2 (0.480)). Our study demonstrates the feasibility of an accurate, objective, easy-to-use, fast, and low-cost AI system for CA detection and screening. The system has the potential to be used by healthcare professionals, caregivers, and general users to assess CAs based on real-world ECG images.



Discussion Paper: Artificial Intelligence in Drug Manufacturing, Notice; Request for Information and Comments

A Notice by the Food and Drug Administration on 03/01/2023


Food and Drug Administration, HHS.

Notice; establishment of a public docket; request for information and comments.

The Food and Drug Administration (FDA or Agency) is announcing publication of a discussion paper providing information for stakeholders and soliciting public comments on a specific area of emerging and advanced manufacturing technologies. The discussion paper presents areas for consideration and policy development identified by the Center for Drug Evaluation and Research (CDER) scientific and policy experts associated with application of artificial intelligence (AI) to pharmaceutical manufacturing. The discussion paper includes a series of questions to stimulate feedback from the public, including CDER and the Center for Biologics Evaluation and Research (CBER) stakeholders.

Submit either written or electronic comments and information by May 1, 2023.

You may submit comments as follows. Please note that late, untimely filed comments will not be considered. The electronic filing system will accept comments until 11:59 p.m. Eastern Time at the end of May 1, 2023. Comments received by mail/hand delivery/courier (for written/paper submissions) will be considered timely if they are received on or before that date.

Submit electronic comments in the following way:

• Federal eRulemaking Portal: . Follow the instructions for submitting comments. Comments submitted electronically, including attachments, to will be posted to the docket unchanged. Because your comment will be made public, you are solely responsible for ensuring that your comment does not include any confidential information that you or a third party may not wish to be posted, such as medical information, your or anyone else's Social Security number, or confidential business information, such as a manufacturing process. Please note that if you include your name, contact information, or other information that identifies you in the body of your comments, that information will be posted on .

Submit written/paper submissions as follows:

• Mail/Hand Delivery/Courier (for written/paper submissions): Dockets Management Staff (HFA-305), Food and Drug Administration, 5630 Fishers Lane, Rm. 1061, Rockville, MD 20852.

Instructions: All submissions received must include the Docket No. FDA-2023-N-0487 for “Discussion Paper: Artificial Intelligence in Drug Manufacturing, Notice; Request for Information and Comments.” Received comments, those filed in a timely manner (see ADDRESSES ), will be placed in the docket and, except for those submitted as “Confidential Submissions,” publicly viewable at or at the Dockets Management Staff between 9 a.m. and 4 p.m., Monday through Friday, 240-402-7500.

• Confidential Submissions: To submit a comment with confidential information that you do not wish to be made publicly available, submit your comments only as a written/paper submission. You should submit two copies total. One copy will include the information you claim to be confidential with a heading or cover note that states “THIS DOCUMENT CONTAINS CONFIDENTIAL INFORMATION.” The Agency will review this copy, including the claimed confidential information, in its consideration of comments. The second copy, which will have the claimed confidential information redacted/blacked out, will be available for public viewing and posted on . Submit both copies to the Dockets Management Staff. If you do not wish your name and contact information to be made publicly available, you can provide this information on the cover sheet and not in the body of your comments and you must identify this information as “confidential.” Any information marked as “confidential” will not be disclosed except in accordance with 21 CFR 10.20 and other applicable disclosure law. For more information about FDA's posting of comments to public dockets, see 80 FR 56469 , September 18, 2015, or access the information at:​content/​pkg/​FR-2015-09-18/​pdf/​2015-23389.pdf .

Docket: For access to the docket to read background documents or the electronic and written/paper comments received, go to and insert the docket number, found in brackets in the heading of this document, into the “Search” box and follow the prompts and/or go to the Dockets Management Staff, 5630 Fishers Lane, Rm. 1061, Rockville, MD 20852, 240-402-7500.

Elizabeth Giaquinto Friedman, Center for Drug Evaluation and Research, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 51, Rm. 4162, Silver Spring, MD 20993, 240-402-7930, [email protected] .

Advanced manufacturing is a term that describes an innovative pharmaceutical manufacturing technology or approach that has the potential to improve the reliability and robustness of the manufacturing process and resilience of the supply chain. Advanced manufacturing can: (1) integrate novel technological approaches, (2) use established techniques in an innovative way, or (3) apply production methods in a new domain where there are no defined best practices. Advanced manufacturing can be used for new or currently marketed large or small molecule drug products.

FDA has recognized and embraced the potential of advanced manufacturing. In 2014, CDER established the Emerging Technology Program (ETP) to work collaboratively with companies to support the use of advanced manufacturing. CDER observed a rapid emergence of advanced manufacturing technologies through the ETP and recognized that regulatory policies and programs may need to evolve to enable timely technological adoption.

The National Academies of Sciences, Engineering, and Medicine issued a 2021 report titled Innovation in Pharmaceutical Manufacturing on the Horizon: Technical Challenges, Regulatory Issues, and Recommendations, highlighting innovations in integrated pharmaceutical manufacturing processes. These innovations could have implications for measurement, modeling, and control technologies used in pharmaceutical manufacturing. AI may play a significant role in monitoring and controlling advanced manufacturing processes.

This discussion paper presents areas associated with the application of AI to pharmaceutical manufacturing that FDA has identified for consideration as FDA evaluates our existing risk-based regulatory framework. CDER scientific and policy experts identified these areas from a comprehensive analysis of existing regulatory requirements applicable to the approval of drugs manufactured using AI technologies. The areas of consideration in this discussion paper are those for which FDA would like public feedback.

There are additional areas of consideration not covered within this document, for example, difficulties that could result from ambiguity on how to apply existing regulations to AI or lack of Agency guidance or experience. The areas of consideration presented in this discussion paper focus on drug products that would be marketed under a new drug application (NDA), abbreviated new drug application (ANDA), or biologic license application (BLA). Public feedback will help inform CDER's evaluation of our existing regulatory framework.

While the initial analysis focused on products regulated by CDER, FDA's CBER has also encountered a rapid emergence of advanced manufacturing technologies associated with AI. As such, both CDER and CBER stakeholders are invited to provide feedback on the discussion questions.

Interested persons are invited to provide detailed comments to CDER and CBER on all aspects described in the discussion paper. To facilitate input, FDA has developed a series of questions based on the considerations articulated in the discussion paper. The questions are not meant to be exhaustive, and FDA is also interested in any other pertinent information stakeholders would like to share on this topic. In all cases, FDA encourages stakeholders to provide the specific rationale and basis for their comments, including any available supporting data and information.

Dated: February 24, 2023.

Lauren K. Roth,

Associate Commissioner for Policy.

[ FR Doc. 2023-04206 Filed 2-28-23; 8:45 am]



ChatGPT and Generative Artificial Intelligence (AI): AI-generated content and citation

APA Citation Style: Artificial Intelligence (Including Chatbots) — February 2023

Outline of an APA Citation for AI: Author/Creator(s). (Date created/updated). [[Name of AI generator]’s response to . . . [prompt query used]]. Date accessed. URL. Formatting: double-space your reference list and use a 0.5 inch hanging indent for each entry.

Real world example: OpenAI. (2023, February 2). [ChatGPT's response to a prompt about First Nations in Ontario].
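The template above is mechanical enough to script. A minimal sketch in Python, assembling the reference from its fields; the function name, field values, and URL below are illustrative, not part of the library's guidance:

```python
def apa_ai_citation(author, date, generator, prompt, accessed, url):
    """Assemble an APA-style reference for AI-generated text, following the
    template: Author/Creator(s). (Date). [[Generator]'s response to a prompt
    about [topic]]. Date accessed. URL."""
    return (f"{author}. ({date}). [{generator}'s response to a prompt about "
            f"{prompt}]. {accessed}. {url}")

# Illustrative values only (the URL is an assumption for the example):
print(apa_ai_citation("OpenAI", "2023, February 2", "ChatGPT",
                      "First Nations in Ontario",
                      "Accessed February 15, 2023",
                      "https://chat.openai.com/chat"))
```

Double-spacing and the hanging indent are a matter of document layout, not of the reference string itself.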


Chicago Manual of Style (17th Edition): Artificial Intelligence (Including Chatbots) — February 2023

Please note: The Chicago Manual of Style, 17th Edition, has not officially released recommendations for referencing ChatBots and other forms of Artificial Intelligence. Please be sure to check back frequently for any updates.

This citation recommendation is for use in class assignments at the University of Waterloo where you have been asked to use AI generated text as part of your work and provide a reference.


Footnote example: OpenAI's ChatGPT, response to query from author, February 15, 2023.

Bibliography outline: Author’s (Parent Company) Medium, Response to “Query in quotes.” Name of Website, Parent Company, Date accessed, URL.

Bibliography example: OpenAI’s ChatGPT, Response to “Explain to general audiences the possible causes and effects of climate change.” ChatGPT, OpenAI, February 15, 2023.

IEEE Citation Style: Artificial Intelligence (Including Chatbots) — February 2023

Please note: IEEE has stated that Artificial Intelligence (AI) outputs, including products of chatbots, are not to be cited for publication purposes. Please be sure to check back frequently for any updates.

Outline of an IEEE Style Citation for Chatbots at the University of Waterloo: [Citation number] Author (Program Name), response to author query. Publisher [Online]. URL, (Accessed date).

Real world example: [1] ChatGPT, response to author query. OpenAI [Online]. (accessed February 15, 2023).

Journal of the American Medical Association (JAMA) Style: Artificial Intelligence (Including Chatbots) — February 2023

JAMA holds that ChatGPT and other generative AI chatbot outputs do not qualify for authorship, so it offers no citation standard: any citation would require listing the artificial intelligence software as an author. Studies have shown such output to be often erroneous, not up to date, unable to reliably cite evidence for its assertions, and lacking in critical judgment.

More detailed information on the JAMA stance toward AI Chatbots may be found in the paper:

Flanagin, A., Bibbins-Domingo, K., & Berkwits, M. (2023, Jan. 31). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge.

MLA Citation Style: Artificial Intelligence (Including Chatbots) — February 2023

Please note: As ChatGPT and similar AI technologies are rapidly evolving, citation styles may also change as new information becomes available. Please be sure to check back for any updates.

Some assignments do not permit the use of ChatGPT or AI tools. Please confirm with your course lecturer if a ChatBot tool is permitted. If using a ChatBot, you must cite the content used to avoid Academic Misconduct.

Outline of an MLA Citation for AI: Author/Creator. "Name of chatbot." Title of platform where accessed, Full URL, Date Accessed (optional).

Real World Example: OpenAI. "ChatGPT." ChatGPT Pro, February 2, 2023.



Artificial Intelligence Applications and Numerical Modelling in Structural Engineering


About this Research Topic

The methods of artificial intelligence have been increasingly implemented in various areas of science and engineering, including civil and structural engineering. The application of these new techniques can greatly facilitate the process of structural analysis and design. In this Research Topic, authors are invited to present their recent research outcomes in the area of artificial intelligence applications in structural engineering. Submissions related to the simulation of structural behavior using numerical methods, as well as applications of new optimization and machine learning methodologies, are welcome; the collection also aims to publish the results of original experimental studies.

The major goal of this Research Topic is to broaden the knowledge base on possible applications of new machine learning and optimization technologies in structural engineering. The efficient application of these techniques can bring great benefits in terms of both cost reduction and environmental impact. In this context, original research as well as review articles related to experimental and numerical studies are welcome. One of the major hurdles to wide-scale adoption of machine learning in structural engineering is the availability of high-quality data sets; therefore, researchers are encouraged to share the data sets they used to develop their machine learning models.

The scope of this collection includes any original research or review paper related to structural engineering, numerical simulation, optimization, and machine learning applications, including experimental outcomes and novel meta-heuristic optimization and artificial intelligence techniques. The following list contains some of the possible research areas where submissions are expected:

• Applications of fiber reinforced polymers in structural retrofitting;
• Finite element analysis of laminated composite plates, steel and concrete structural members;
• Optimization of structural cost and weight using novel optimization techniques;
• Data-driven prediction of structural performance using machine learning algorithms;
• Experimental and numerical analysis of structural stability and vibration;
• Predictive modeling using regression techniques.

Keywords : Composite structures, Steel and concrete structures, Fiber reinforced polymers, Finite element analysis, Artificial intelligence, Machine learning, Optimization

Important Note : All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.



From passing medical school exams to publishing articles in academic journals, it’s no secret that ChatGPT is taking the medical world—and the world at large—by storm. While that’s sparked some concerns about the right balance of using AI in medicine, it’s also opened up a whole new world of opportunities for radiology practices that take pride in being on the cutting-edge of technology. 

In a Feb. 28 paper for Diagnostic and Interventional Imaging , the authors—who reveal that ChatGPT itself wrote most of the paper—explore ways that radiologists can use ChatGPT. They largely focus on academic writing, but also note some practical ways for reading radiologists to capitalize on the AI-powered tool's abilities.  [1]  

Here, we highlight and expand upon possible ways that all types of radiologists can use ChatGPT: to research and write academic papers, enhance clinical decision-making, and improve patient communication and care. 

ChatGPT for clinical radiologists

Implement ChatGPT as a chatbot for patient inquiries

One of ChatGPT’s most groundbreaking abilities is its natural language processing, which allows it to skillfully handle text-based chat sessions and respond with natural-sounding language. 

Practices can consider putting a chatbot on their website as a “first line of defense” for fielding patient inquiries (which requires the right plugin and may take some technological prowess to get right). The paper's authors note that ChatGPT can answer questions about medical procedures and examinations (“How should I prepare for an MRI?” “Can I get a CT scan while I’m pregnant?”), results, and follow-up recommendations. Additionally, practices can even train a chatbot to answer questions about the practice itself, such as opening hours, contact numbers, and fees. 

If successful, staff can spend less time answering emails or phone calls and more time focusing on patient care. 
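The triage idea above can be sketched in a few lines: answer questions the practice has already written down, and hand everything else to a language model. Everything here is an illustrative placeholder, not a specific product or stack; in particular, `ask_llm` is a hypothetical stub standing in for a real chat-completion API call.

```python
# Minimal sketch of a "first line of defense" patient-inquiry chatbot.
# Known topics are answered from a practice-maintained FAQ table; anything
# else falls through to a language model (stubbed out here).

PRACTICE_FAQ = {
    "opening hours": "We are open Monday to Friday, 8 am to 6 pm.",
    "mri preparation": "Remove all metal objects and arrive 30 minutes early.",
}

def ask_llm(question: str) -> str:
    # Placeholder: a real deployment would call a chat-completion API with
    # a prompt constrained to the practice's verified policies.
    return "Let me connect you with a staff member for that question."

def answer(question: str) -> str:
    q = question.lower()
    for topic, reply in PRACTICE_FAQ.items():
        if topic in q:          # naive keyword routing, for illustration only
            return reply
    return ask_llm(question)

print(answer("What are your opening hours?"))
print(answer("Can I get a CT scan while pregnant?"))
```

In practice the routing would be fuzzier than substring matching, and any model-generated answer about medical topics would need the same human review the article recommends elsewhere.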

Support clinical decision-making

ChatGPT is language-based, so it can’t analyze any radiological images. However, it can serve as a handy assistant for radiologists who know how to take advantage of its knowledge base. 

The Diagnostic and Interventional Imaging paper explains that ChatGPT can share information on analysis methodologies, techniques, and relevant papers, which can help guide the analysis. When provided with context and background information, the authors note, ChatGPT can even generate captions or legends for radiological images.

Radiologists shouldn't be afraid to get creative when asking for assistance. If a reading radiologist sees an uncommon abnormality on an image, for example, they can ask ChatGPT about possible underlying causes to make sure they're not overlooking anything. Or if a radiologist wants to make sure they're following the latest clinical guidelines on a certain topic, they can ask ChatGPT to point them to the most recent, most relevant literature. The goal is not to depend on it, but to use it as a support tool—think of ChatGPT as a highly intelligent search engine. Just be careful to fact-check its answers, as it can sometimes make mistakes. 

Enhance patient communication and follow-up care. 

In a paper by University of Munich researchers , ChatGPT is also explored for its abilities to simplify existing radiology reports to make them more readable for the average patient. [2] The AI can remove and replace medical jargon, helping patients gain a better understanding of their diagnosis and what it means. The authors found some inconsistencies, and warn that it's important to read and revise as necessary—but ChatGPT can still be a great head start. 
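As a toy illustration of what "remove and replace medical jargon" means, here is a glossary-lookup sketch. The glossary entries are invented for the example, and an LLM rewrite is far more context-sensitive than this word-for-word substitution; the clumsy output below is itself a hint of why.

```python
# Naive baseline for report simplification: replace jargon terms with
# plain-language equivalents from a fixed glossary (illustrative entries).
GLOSSARY = {
    "pulmonary": "lung",
    "edema": "swelling caused by fluid",
    "bilateral": "on both sides",
}

def simplify(report: str) -> str:
    out = report
    for term, plain in GLOSSARY.items():   # dicts preserve insertion order
        out = out.replace(term, plain)
    return out

print(simplify("bilateral pulmonary edema"))
# → "on both sides lung swelling caused by fluid"
```

A model like ChatGPT can instead rephrase the whole sentence ("there is fluid-related swelling in both lungs"), which is why the researchers focus on LLM rewrites rather than lookup tables, and also why every rewrite still needs a radiologist's review.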

Additionally, every radiologist knows that a diagnosis isn’t the end of the patient care journey. ChatGPT can help draft tailored recommendations for follow-up care based on input about a patient’s individual circumstances, can craft email text to send to patients, and more—once again, however, it's important for radiologists to read through and correct the information for any errors or inaccuracies before anything gets passed to a patient. 

ChatGPT for academic radiologists

Suggest impactful and engaging titles for research articles

The authors of the Diagnostic and Interventional Imaging paper note that, simply by being given information about the research topic, research question, and main findings, ChatGPT can generate a long list of suggested titles for academic papers. That can help academic radiologists save time and focus on gathering and analyzing data. 

Assist with structure, format, and drafting of a research paper

While ChatGPT shouldn't be relied upon to actually write the contents of an academic paper in its entirety, the authors note that it can be quite helpful in offering suggestions for how to structure the paper and what elements to include in each section, perhaps even offering "starter" language for the introduction section when prompted with enough information about the research question. For authors who are writing in a second language, ChatGPT may also be helpful by providing assistance with translation as well as editing a draft for grammar and clarity. 

Formatting the bibliography of a research paper

While it's up to the academic radiologist to make sure that a paper is accurately and appropriately sourced, ChatGPT can step in when it comes to formatting citations for the bibliography section. Radiologists can provide ChatGPT with the source they used and ask for it to be formatted in a number of different writing styles (APA, MLA, Chicago, etc.). 

Paving the way for a more intelligent future 

"The best use of ChatGPT in radiology will vary depending on the specific needs and goals of the organization and individual radiologists," the  Diagnostic  and Interventional Imaging  paper notes. 

Once radiologists start looking at ChatGPT as a true partner with access to an enormous amount of information, they'll start seeing new ways to use it everywhere they turn—and hopefully, both papers and practices will be better for it. 


1. A. Lecler, L. Duron and P. Soyer, Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT, Diagnostic and Interventional Imaging 000 (2023) 1–6.

2. K. Jeblick, B. Schachtner, J. Dexl, A. Mittermeier, A. Stüber, J. Topalis, T. Weber, P. Wesp, B. Sabel, J. Ricke and M. Ingrisch, ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports (2022), arXiv, doi:10.48550/arXiv.2212.14882.



