I realize I'm responding to this a little late, but I think it's important to understand just how incorrect this first paragraph is.
AI is an extremely common tool in certain fields, and has been used for years - arguably for decades depending on where you think "machine learning" ends and "artificial intelligence" begins. For a basic example, take a look at this high content imaging system from Molecular Devices:
ImageXpress Confocal HT.ai High-Content Imaging System
Here's more information about the software it runs:
Advanced Cloud-Based Analytics with StratoMineR
Now, take a look at some of the applications they advertise it for: Cellular Imaging & Analysis, Drug Discovery & Development, Stem Cell Research, Toxicology. These are not pie-in-the-sky dreams that they hope to develop in the next 60 years. These are real tasks being performed with AI by companies that buy this product and use it for exactly those things. Today.
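To make the category of task concrete: none of these vendors publish their internals, but at its core "AI-driven" high-content imaging means a model learns to distinguish cell phenotypes from labeled example images. Here's a deliberately tiny, hypothetical sketch of that idea (a nearest-centroid classifier on synthetic 8x8 "cell images", not any vendor's actual pipeline):

```python
# Toy sketch (not any product's real method): learned classification of
# cell images, the kind of task high-content imaging platforms automate.
# Two synthetic phenotypes differ only in mean stain intensity.
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, brightness):
    # n synthetic 8x8 grayscale images around a given mean brightness
    return rng.normal(loc=brightness, scale=0.1, size=(n, 8, 8))

# "Training" data: dim (untreated) vs bright (treated) phenotypes
untreated = make_images(50, brightness=0.3)
treated = make_images(50, brightness=0.7)

# Learn one template per class: the mean training image (nearest-centroid model)
centroids = {"untreated": untreated.mean(axis=0),
             "treated": treated.mean(axis=0)}

def classify(image):
    # Assign the label of the closest class template in pixel space
    return min(centroids, key=lambda k: np.linalg.norm(image - centroids[k]))

# Classify a new, unseen image drawn from the "treated" distribution
print(classify(make_images(1, brightness=0.7)[0]))  # → treated
```

Real systems use deep networks and far richer features, but the principle is the same: the software is trained on examples and then reliably labels new images on its own, which is exactly the automation these products sell.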
And that product is not unique. Here's a similar use of AI for tissue analysis (check out Research>Analysis Examples for an idea of what it can identify):
Oncotopix® Discovery - AI deep learning for pathology tissue image analysis
And here's another that was previously specialized for neuroscience and recently rebranded to widen their user base:
Rewire AI Take a look at their "Rewire is trusted by" section to get an idea of some of the places that use this stuff.
Now, if you want to try to trivialize this by saying the AI "doesn't really understand anything" or "isn't aware", that's fine. I don't have any desire to get into a philosophical argument about what knowledge or consciousness is. But to say that AI can't give trustworthy advice or accurate information is to ignore a reality we already live in.