Showing posts with label Artificial Intelligence.

Sunday, 10 November 2019

Microsoft developing AI tool to diagnose cervical cancer faster

Image Source : news.microsoft.com
The World Health Organization (WHO) estimates that cervical cancer is the fourth most frequent cancer among women worldwide. India accounts for 16% of cervical cancer patients globally, and demand for cervical cancer screening has been rising worryingly.

SRL Diagnostics is the largest diagnostics laboratory in India and receives around 100,000 Pap smear samples every year. A single cytopathologist in its Mumbai laboratory screens around 200 slides for cervical cancer every day, in addition to 100 slides for other types of cancer.

To address the shortage of cytopathologists, SRL Diagnostics partnered with Microsoft to create an AI model, exposed through an API, that screens out normal slides so that cytopathologists can concentrate on the abnormal ones. This gives a huge boost to overall screening speed, since only about 2% of all samples turn out to be abnormal and need deeper analysis.

To develop the AI algorithm, cytopathologists manually studied digitally scanned Whole Slide Imaging (WSI) slides and recorded their observations, which were then used as training data for the AI model. Initially, SRL Diagnostics assigned only one cytopathologist, but since each WSI consists of around 1,800 tile images, the task proved too burdensome for an individual. The subjectivity of human annotation was another factor the AI algorithm had to adjust for.
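As a rough illustration of how such tile-based screening could work, the sketch below splits a scanned slide into tiles, scores each tile with a binary classifier and flags the whole slide for review if any tile looks suspicious. The model file, tile size and threshold are assumptions for illustration; this is not the actual SRL-Microsoft implementation.

```python
# Illustrative sketch: screen out normal slides by classifying each tile of a
# whole-slide image (WSI). Model, tile size and threshold are hypothetical.
import numpy as np
from tensorflow import keras

TILE = 512          # assumed tile edge length in pixels
THRESHOLD = 0.5     # assumed probability above which a tile counts as abnormal

model = keras.models.load_model("cervical_tile_classifier.h5")  # hypothetical model file

def screen_slide(wsi: np.ndarray) -> bool:
    """Return True if the slide should be reviewed by a cytopathologist."""
    h, w, _ = wsi.shape
    tiles = [
        wsi[y:y + TILE, x:x + TILE]
        for y in range(0, h - TILE + 1, TILE)
        for x in range(0, w - TILE + 1, TILE)
    ]
    probs = model.predict(np.stack(tiles) / 255.0, verbose=0).ravel()
    # Only ~2% of slides are expected to be abnormal, so most slides are
    # screened out here and never need a manual cervical cancer review.
    return bool((probs > THRESHOLD).any())
```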


A digitally scanned image of a WSI slide, used to train the AI model. | Source : news.microsoft.com

“Different cytopathologists examine different elements in a smear slide in a unique manner even if the overall diagnosis is the same. This is the subjectivity element in the whole process, which many a time is linked to the experience of the expert,” reveals Dr. Arnab Roy, Technical Lead, New Initiatives & Knowledge Management, SRL Diagnostics.

To address the burden of volume, five cytopathologists across multiple labs in different locations were assigned to the task. This led to thousands of tile images of cervical smears being annotated, with concordant and discordant notes created for each sample image. Any sample image with discordant notes from a minimum of three cytopathologists was then sent to senior cytopathologists. This is how the issue of subjectivity was handled.
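A minimal sketch of that triage logic, under the assumption that each tile's final label is taken by majority vote and that any tile where at least three annotators disagree with the majority is escalated to a senior cytopathologist (the data structures and label names here are hypothetical):

```python
# Hypothetical sketch of the annotation triage described above.
from collections import Counter
from typing import Dict, List

MIN_DISCORDANT = 3  # threshold mentioned in the article

def triage(annotations: Dict[str, List[str]]) -> Dict[str, str]:
    """annotations maps tile_id -> labels given by the five cytopathologists."""
    final = {}
    for tile_id, labels in annotations.items():
        majority_label, majority_votes = Counter(labels).most_common(1)[0]
        discordant = len(labels) - majority_votes
        if discordant >= MIN_DISCORDANT:
            final[tile_id] = "escalate_to_senior_cytopathologist"
        else:
            final[tile_id] = majority_label
    return final

# Example: only two annotators agree on any one label, so the tile is escalated.
print(triage({"tile_001": ["normal", "abnormal", "suspicious", "normal", "abnormal"]}))
```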

This is the first such AI-Device-Labs setup in the histopathology space in this part of the world, and it impacts the entire spectrum of stakeholders. For patients, it reduces the turnaround time to diagnosis and the onset of treatment. For cytopathologists, it improves productivity and accuracy. For doctors, it offers insights that support better treatment decisions.

“With the growing burden of cancer, there is a need to quickly and accurately analyze the samples to help clinicians arrive at a diagnosis faster and with a higher degree of objectivity. The work done by the SRL-Microsoft consortium in developing deep learning-based algorithms as an assistive tool in a relatively short span of time, speaks volumes about the capabilities of both the partners. This particular cervical cancer AI API shall be useful in screening liquid-based cytology slide images, unlocking precious dead-time of the pathologists enabling them to report more cases and/or focus more on complicated cases,” adds Arindam Haldar, CEO, SRL Diagnostics.

In August this year, SRL Diagnostics launched an internal preview of the API. Over the next three to six months, the AI model will be put through rigorous clinical validation protocols. More than half a million anonymized digital tile images will be used in this exercise, making it one of the largest of its kind. Following internal validation, the API will be used in external cervical cancer diagnostics, including at hospitals and other diagnostic centers.


This is one of the latest examples of Microsoft using the power of AI to solve real-world problems in India. Last week, they announced how their HAMS project is helping to automate driving tests in India. You can read about that project here.

Monday, 28 October 2019

Robotic hand made by Elon Musk's OpenAI learns to solve Rubik's Cube

Image Source : OpenAI Blog

Last year we were amazed by the level of dexterity achieved by OpenAI's Dactyl system, which learned how to manipulate a cube so as to display any commanded face. If you missed that article, read about it here.

OpenAI then set themselves a harder task: teaching the robotic hand to solve a Rubik's Cube. It is a daunting task, made no easier by the fact that the robot uses a single hand, something most humans would find hard to do. OpenAI again harnessed neural networks trained entirely in simulation. However, one of the main challenges was making the simulations realistic, because physical factors like friction and elasticity are very hard to model.

The solution they came up with was a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in which the simulated hand must solve the Rubik's Cube. This ensures that real-world physics falls somewhere within the spectrum of generated environments, bypassing the need to train on a single, highly accurate environmental model.

One of the parameters randomized was the size of the Rubik’s Cube. ADR begins with a fixed size of the Rubik’s Cube and gradually increases the randomization range as training progresses. The same technique is applied to all other parameters, such as the mass of the cube, the friction of the robot fingers, and the visual surface materials of the hand. The neural network thus has to learn to solve the Rubik’s Cube under all of these increasingly difficult conditions.
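The toy sketch below illustrates the core ADR loop: every physical parameter starts at a single fixed value, environments are sampled uniformly from the current ranges, and a range is widened whenever the policy keeps succeeding. Parameter names, step sizes and the success check are invented for illustration; this is not OpenAI's code.

```python
# Toy illustration of Automatic Domain Randomization (ADR), not OpenAI's implementation.
import random

ranges = {
    "cube_size_cm":    [5.7, 5.7],   # each parameter starts fixed at a nominal value
    "cube_mass_kg":    [0.09, 0.09],
    "finger_friction": [1.0, 1.0],
}
EXPAND_STEP = 0.02  # how far a boundary moves per expansion (assumed)

def sample_environment():
    """Draw one simulated environment from the current randomization ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def expand(name):
    """Widen one parameter's range, making future environments harder."""
    lo, hi = ranges[name]
    delta = EXPAND_STEP * max(abs(hi), 1.0)
    ranges[name] = [lo - delta, hi + delta]

def run_policy_in_simulation(env) -> bool:
    """Placeholder for training the policy on one environment and checking success."""
    return True

for step in range(10_000):
    env = sample_environment()
    if run_policy_in_simulation(env) and step % 100 == 0:
        expand(random.choice(list(ranges)))
```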

Here is an uncut version of the robot hand solving the Rubik's cube:


To test the limits of this method, they experimented with a variety of perturbations while the hand was solving the Rubik’s Cube. This tests not only the robustness of the control network but also the vision network, which is used to estimate the cube’s position and orientation. The system trained with ADR turned out to be surprisingly robust: the robot can successfully perform most flips and face rotations under all tested perturbations, though not at peak performance.

The impressive robustness of the robot hand to perturbations can be seen in this video:



Tuesday, 7 August 2018

AI-powered face-recognition system to be used at the 2020 Olympics


Image Source : s3.reutersmedia.net
Japan is the land that gave us technological inventions like the Walkman, VHS, the bullet train, the pocket calculator, the laptop and many more that have changed the way we live. So it is no surprise that it has decided to utilize the power of the latest technology rapidly revolutionizing the world we live in: Artificial Intelligence (AI).

NEC has built a facial-recognition system that will allow athletes, officials and others accredited for the Games to have hassle-free access to restricted areas. The identity cards issued to accredited individuals will carry their facial data, which will be collected beforehand. At each security checkpoint, individuals will hold their card up to a terminal while looking into the camera, which cross-verifies their identity.
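NEC has not published the internals of its system, but the verification step described here is a standard 1:1 face match: an embedding derived from the facial data on the card is compared with an embedding computed from the live camera frame. The sketch below shows that idea only; the embed() function and the similarity threshold are placeholders.

```python
# Generic 1:1 face-verification sketch; embed() and the threshold are hypothetical,
# and this does not describe NEC's Bio-IDiom internals.
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed cosine-similarity threshold

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model (e.g. a CNN producing a 512-d vector)."""
    raise NotImplementedError

def verify(card_embedding: np.ndarray, camera_frame: np.ndarray) -> bool:
    """Open the checkpoint only if the live face matches the face data on the card."""
    live = embed(camera_frame)
    similarity = float(np.dot(card_embedding, live) /
                       (np.linalg.norm(card_embedding) * np.linalg.norm(live)))
    return similarity >= MATCH_THRESHOLD
```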

NEC is a global leader in identification technologies that use facial recognition, iris scans, fingerprints, palm prints, finger veins, voice and ear acoustics. The terminals will use its "Bio-IDiom" engine, which has reportedly been named the world's top face-recognition technology four consecutive times by the U.S. National Institute of Standards and Technology (NIST).

Here is a video of the terminal in action:
 


This means that the process of verification will be faster and also more secure, as users will not be able to pass their access cards to others for misuse. “This latest technology will enable strict identification of accredited people compared with relying solely on the eyes of security staff, and also enables swift entry to venues — which will be necessary in the intense heat of summer. I hope this will ensure a safe and secure Olympic and Paralympic Games and help athletes perform at their best,” said Tsuyoshi Iwashita, the security executive director for the Games.


Tuesday, 31 July 2018

Elon Musk's startup builds AI to make robotic hand move like humans

Image Source : OpenAI Blog


OpenAI was co-founded by Elon Musk in 2015 as a non-profit research company that aims to discover and enact the path to safe artificial general intelligence (AGI).

One of the most remarkable features that evolution has bestowed upon us, other than our brain, is our hands. Many scientists believe that our opposable thumbs are, in fact, what allowed us to surpass other highly intelligent creatures like dolphins and elephants.

In a blog post published on Monday, OpenAI claims to have harnessed the power of AI and deep learning to give robots the dexterity of the human hand. Their system, named Dactyl, is trained entirely in simulation and is able to apply that training in the real world.

Here are the examples of the complex movements the robotic hand is capable of performing:

 
Video source : blog.openai.com

The degrees of freedom of a robot are, put simply, the number of independent ways it can move. In most industrial applications, a robotic arm with 7 degrees of freedom is considered quite advanced. The Dactyl-trained hand from OpenAI has 24 degrees of freedom. Furthermore, it can work with partial information from its sensors and manipulate objects of different geometries.

Here is a schematic of how OpenAI trains the Robot:
Image Source : OpenAI Blog





As the illustration above shows, the robot is trained entirely in simulation, which allows it to be taught much faster. The setup also uses ordinary RGB cameras, with neural networks running orientation-estimation algorithms to see the object, so it does not need objects specially designed for camera tracking.
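Details of OpenAI's vision network aside, the kind of orientation estimator described here can be sketched as a small convolutional network that maps an RGB camera image to the object's orientation as a unit quaternion. The architecture and sizes below are illustrative assumptions, not the actual Dactyl model.

```python
# Illustrative pose-estimation network: RGB image in, unit quaternion out.
import tensorflow as tf
from tensorflow.keras import layers

def build_pose_net(input_shape=(200, 200, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    quat = layers.Dense(4)(x)  # raw 4-vector
    quat = layers.Lambda(lambda q: tf.math.l2_normalize(q, axis=-1))(quat)  # unit quaternion
    return tf.keras.Model(inputs, quat)

# Trained on (image, orientation) pairs generated in simulation, so no special
# tracking markers are needed on the physical object.
model = build_pose_net()
model.compile(optimizer="adam", loss="mse")
```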



This could have valuable applications in handling objects that are harmful to humans. If the technology succeeds, it could be extended to other human movements and one day be used to build complete humanoid robots like the ones we saw in the movie "I, Robot".


Friday, 27 July 2018

Google unveils tiny AI chips for offline Machine Learning inference


Google's Edge TPU | Image Source : cdn.vox-cdn.com

Machine Learning services provided by Google have until now been completely cloud-based. This means the cloud had to be used not only for storing data but also for analysis and inference by the machine learning algorithms running on Google's Tensor Processing Units (TPUs) located in its data centers.

Google's TPU | Image source : cdn.vox-cdn.com

As you can guess, this kind of setup has the drawbacks of depending on internet connectivity and being more vulnerable to attackers trying to steal live machine data. This has been one of the main reasons OEMs (Original Equipment Manufacturers) have been reluctant to use Google's machine learning services.

The newly unveiled Edge TPU seeks to overcome that hurdle by running the inference part locally on the device to which it is attached. Customers can store older machine data in Google's cloud, use it to train their models there, and then deploy those models to Edge TPUs integrated into their machines, providing intelligent inference without having to connect to the cloud.
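As a minimal sketch of what such on-device inference looks like with the publicly documented TensorFlow Lite runtime and Edge TPU delegate (the model file and input data are assumptions here):

```python
# Run a TensorFlow Lite model compiled for the Edge TPU entirely on the device.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",                        # hypothetical compiled model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def infer(sensor_frame: np.ndarray) -> np.ndarray:
    """One local inference pass, with no round-trip to the cloud."""
    interpreter.set_tensor(input_detail["index"],
                           sensor_frame.astype(input_detail["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(output_detail["index"])
```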

Here is Google's illustration explaining the setup:
Image Source : blog.google
Google Cloud's Vice President of IoT, Injong Rhee, said: “Edge TPUs are designed to complement our Cloud TPU offering, so you can accelerate ML training in the cloud, then have lightning-fast ML inference at the edge. Your sensors become more than data collectors — they make local, real-time, intelligent decisions.”

Google is also making a development kit available so that users can test the technology before deciding to incorporate it into their machines. It features a system-on-module (SOM) that combines Google’s Edge TPU, an NXP CPU, Wi-Fi, and Microchip’s secure element in a compact form factor.

Image Source : blog.google
Here is what some of Google's customers are saying about the new technology:

“Our Intelligent Vision Inspection solution enables us to deliver enhanced quality and efficiency in the factory operations of various LG manufacturing divisions. With Google Cloud AI, Google Cloud IoT Edge, and Edge TPU, combined with our conventional MES systems and years of experience, we believe Smart Factory will become increasingly more intelligent and connected,” says Shingyoon Hyun, the CTO of LG CNS. “With Intelligent Vision Inspection, we are eager to make a better working place, raise the quality of product, and save millions of dollars each year. Google Cloud AI and IoT technologies with LG CNS expertise make this possible.”

"Smart Parking enables our customers to deploy and manage frictionless parking services for both on-street and off-street situations. We are very excited about our ability to use Cloud IoT Edge and Edge TPU for building ML-enabled parking experiences for our customers,” says John Heard, CTO of Smart Parking. “At Smart Parking, our mission is to re-invent the parking experience for every solution user. The introduction of Cloud IoT Edge, Google Cloud IoT enables us to deliver on this promise in new ways within our SmartSpot gateway products.”

“At XEE, we’re working to make driving simpler, safer and more economical through our connected car platform,” explains Romain Crunelle, CTO at XEE. “Cloud IoT Edge and Edge TPU will help us to address use cases such as driving analysis, road condition analysis, and tire wear and tear in real time and in a much more cost efficient and reliable way. Enabling accelerated ML inference at the edge will enable the XEE platform to analyze images and radar data faster from the connected cars, detect potential driving hazards and alert drivers with real-time precision."

"Trax is helping retailers build a sound foundation for digital transformation,” says David Gottlieb, General Manager, Global Retail at Trax. “Cloud IoT Edge and Edge TPU will help address critical use cases such as improving on shelf availability (OSA), optimizing click-and-collect processes, and modernizing the shopping experience. This Google technology will enable accelerated machine learning at the edge—in-store images are captured and flowed through the Trax platform, where those digitized shelf images are analyzed at an increasingly faster rate providing retailers with the agility to both respond to issues in real time and to consistently delight shoppers.”


With all major companies pushing toward Industry 4.0 solutions, the Edge TPU could really help Google bound ahead in capturing the $11.1 trillion IoT market predicted by McKinsey.




Do leave your comments and thoughts below.