Developments


Deep learning and AI have moved well beyond science fiction into the cutting edge of internet and enterprise computing. Access to more computational power in the cloud, the advancement of sophisticated algorithms, and the availability of funding are unlocking possibilities unimaginable just five years ago. But it is the availability of new, rich data sources that is making deep learning real. To advance the state of the art, developers and data scientists need to carefully select the underlying databases that manage the input, training, and results data. Many software platforms are already helping teams realize the potential of AI.

Some Applied Fields for AI

Dugong Monitoring

On the west coast of Australia, Amanda Hodgson is launching drones out towards the Indian Ocean so that they can photograph the water from above. The photos are a way of locating dugongs, or sea cows, in the bay near Perth—part of an effort to prevent the extinction of these endangered marine mammals. The trouble is that Hodgson and her team don't have the time needed to examine all those aerial photos. There are too many of them—about 45,000—and spotting the dugongs is far too difficult for the untrained eye. So she's giving the job to a deep neural network.

Neural networks are the machine learning models that identify faces in the photos posted to your Facebook news feed. They also recognize the questions you ask your Android phone, and they help run the Google search engine. Modeled loosely on the network of neurons in the human brain, these sweeping mathematical models learn all these things by analyzing vast troves of digital data. Now, Hodgson, a marine biologist at Murdoch University in Perth, is using this same technique to find dugongs in thousands of photos of open water, running her neural network on the same open-source software, TensorFlow, that underpins the machine learning services inside Google.

As Hodgson explains, detecting these sea cows is a task that requires a particular kind of pinpoint accuracy, mainly because these animals feed below the surface of the ocean. "They can look like whitecaps or glare on the water," she says. But that neural network can now identify about 80 percent of dugongs spread across the bay.
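Hodgson's exact pipeline has not been published; purely as an illustration of the general approach, the sketch below fine-tunes a pretrained TensorFlow image classifier to flag aerial-photo tiles that might contain a dugong. The folder layout, tile size and class names are hypothetical.

    # Illustrative only: flag aerial-photo tiles that may contain a dugong.
    # The "aerial_tiles/train" folder layout (dugong / no_dugong) is hypothetical.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "aerial_tiles/train", image_size=(224, 224), batch_size=32)

    base = tf.keras.applications.MobileNetV2(
        include_top=False, pooling="avg",
        input_shape=(224, 224, 3), weights="imagenet")
    base.trainable = False  # reuse generic image features; train only the classifier head

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),       # probability a tile shows a dugong
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)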
The project is still in the early stages, but it hints at the widespread impact of deep learning over the past year. In 2016, this very old but newly powerful technology helped a Google machine beat one of the world's top players at the ancient game of Go—a feat that didn't seem possible just a few months before. But that was merely the most conspicuous example. As the year comes to a close, deep learning isn't a party trick. It's not niche research. It's remaking companies like Google, Facebook, Microsoft, and Amazon from the inside out, and it's rapidly spreading to the rest of the world, thanks in large part to the open source software and cloud computing services offered by these giants of the internet.

The New Translation

In previous years, neural nets reinvented image recognition through apps like Google Photos, and they took speech recognition to new levels via digital assistants like Google Now and Microsoft Cortana. This year, they delivered the big leap in machine translation, the ability to automatically translate from one language to another. In September, Google rolled out a new service it calls Google Neural Machine Translation, which operates entirely through neural networks. According to the company, this new engine has reduced error rates by 55 to 85 percent when translating between certain languages.
Google trains these neural networks by feeding them massive collections of existing translations. Some of this training data is flawed, including lower quality translations from previous versions of the Google Translate app. But it also includes translations from human experts, and this buoys the quality of the training data as a whole. That ability to overcome imperfection is part of deep learning's apparent magic: given enough data, even if some is flawed, it can train to a level well beyond those flaws.

Mike Schuster, a lead engineer on Google's service, is happy to admit that his creation is far from perfect. But it still represents a breakthrough. Because the service runs entirely on deep learning, it's easier for Google to continue improving the service. It can concentrate on refining the system as a whole, rather than juggling the many small parts that characterized machine translation services in the past.
Meanwhile, Microsoft is moving in the same direction. This month, it released a version of its Microsoft Translator app that can drive instant conversations between people speaking as many as nine different languages. This new system also runs almost entirely on neural nets, says Microsoft vice president Harry Shum, who oversees the company's AI and research group. That's important, because it means Microsoft's machine translation is likely to improve more quickly as well.

The New Chat

In 2016, deep learning also worked its way into chatbots, most notably the new Google Allo. Released this fall, Allo will analyze the texts and photos you receive and instantly suggest potential replies. It's based on an earlier Google technology called Smart Reply that does much the same with email messages. The technology works remarkably well, in large part because it respects the limitations of today's machine learning techniques. The suggested replies are wonderfully brief, and the app always suggests more than one, because, well, today's AI doesn't always get things right.
Inside Allo, neural nets also help respond to the questions you ask of the Google search engine. They help the company's search assistant understand what you're asking, and they help formulate an answer. According to Google research product manager David Orr, the app's ability to zero in on an answer wouldn't be possible without deep learning. "You need to use neural networks—or at least that is the only way we have found to do it,” he says. “We have to use all of the most advanced technology we have.”
What neural nets can't do is actually carry on a real conversation. That sort of chatbot is still a long way off, whatever tech CEOs have promised from their keynote stages. But researchers at Google, Facebook, and elsewhere are exploring deep learning techniques that help reach that lofty goal. The promise is that these efforts will provide the same sort of progress we've seen with speech recognition, image recognition, and machine translation. Conversation is the next frontier.


The New Data Center

This summer, after building an AI that cracked the game of Go, Demis Hassabis and his Google DeepMind lab revealed they had also built an AI that helps operate Google's worldwide network of computer data centers. Using a technique called deep reinforcement learning, which underpins both their Go-playing machine and earlier DeepMind services that learned to master old Atari games, this AI decides when to turn on cooling fans inside the thousands of computer servers that fill these data centers, when to open the data center windows for additional cooling, and when to fall back on expensive air conditioners. All told, it controls over 120 functions inside each data center.
As Bloomberg reported, this AI is so effective that it saves Google hundreds of millions of dollars. In other words, it pays for the cost of acquiring DeepMind, which Google bought for about $650 million in 2014. Now, DeepMind plans to install additional sensors in these computing facilities so it can collect additional data and train this AI to even higher levels.

The New Cloud

As they push this technology into their own products and services, the giants of the internet are also pushing it into the hands of others. At the end of 2015, Google open sourced TensorFlow, and over the past year this once-proprietary software spread well beyond the company's walls, all the way to people like Amanda Hodgson. At the same time, Google, Microsoft, and Amazon began offering their deep learning tech via cloud computing services that any coder or company can use to build their own apps. Artificial-intelligence-as-a-service may wind up as the biggest business for all three of these online giants.


Over the last twelve months, this burgeoning market spurred another AI talent grab. Google hired Stanford professor Fei-Fei Li, one of the biggest names in the world of AI research, to oversee a new cloud computing group dedicated to AI, and Amazon nabbed Carnegie Mellon professor Alex Smola to play much the same role inside its cloud empire. The big players are grabbing the world's top AI talent as quickly as they can, leaving little for others. The good news is that this talent is working to share at least some of the resulting tech with anyone who wants it.
As AI evolves, the role of the computer scientist is changing. Sure, the world still needs people who can code software. But increasingly, it also needs people who can train neural networks, a very different skill that's more about coaxing a result from the data than building something on your own. Companies like Google and Facebook are not only hiring a new kind of talent, but also reeducating their existing employees for this new future—a future where AI will come to define technology in the lives of just about everyone.



Iceberg Detection

Drifting icebergs, and the difficulty of detecting them, are major threats to marine safety and physical oceanography. Ships that strike icebergs can sink, causing major loss of human life. Synthetic Aperture Radar (SAR) satellite images can be analyzed automatically with deep learning to monitor these objects and classify each one as a ship or an iceberg. In this experiment, the Kaggle iceberg dataset (images provided by a SAR satellite) was used, and the images were classified using the AlexNet topology and the Keras library. The experiments were performed on Intel® Xeon® Gold processor-powered systems, and a training accuracy of 99 percent and an inference accuracy of 86 percent were achieved.

An iceberg is a large chunk of ice that has calved from a glacier. Icebergs come in different shapes and sizes. Because most of an iceberg's mass is below the water surface, it drifts with the ocean currents, posing risks to ships, navigation, and offshore infrastructure. Currently, many companies and institutions use aircraft and shore-based support to monitor the risk from icebergs. This monitoring is challenging in harsh weather conditions and remote areas.
To mitigate these risks, Statoil, an international energy company, is working closely with the Centre for Cold Ocean Resources Engineering (C-CORE) to remotely sense icebergs and ships using SAR satellites. SAR satellites do not depend on visible light and can capture images of targets even in darkness, cloud, fog, and harsh weather. The main objective of this experiment on Intel® architecture was to automatically classify a satellite image as an iceberg or a ship.

For this experiment, the AlexNet topology with the Keras library was used to train an iceberg classifier and run inference on an Intel® Xeon® Gold processor. The iceberg dataset was taken from Kaggle, and the approach was to train the model from scratch.
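As a rough illustration of that setup (not the exact Intel code), the sketch below builds a small AlexNet-style CNN in Keras for 75×75 two-band SAR tiles, the format of the Kaggle iceberg data; the arrays X_train and y_train are assumed to hold the decoded images and their ship/iceberg labels.

    # Illustrative sketch, not the exact Intel experiment: an AlexNet-style CNN in Keras
    # trained from scratch. X_train / y_train are assumed to hold 75x75 two-band SAR
    # tiles and their ship (0) / iceberg (1) labels, decoded from the Kaggle JSON files.
    from tensorflow.keras import layers, models

    def build_iceberg_classifier(input_shape=(75, 75, 2)):
        model = models.Sequential([
            layers.Conv2D(64, (5, 5), activation="relu", padding="same",
                          input_shape=input_shape),
            layers.MaxPooling2D((3, 3), strides=2),
            layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((3, 3), strides=2),
            layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
            layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((3, 3), strides=2),
            layers.Flatten(),
            layers.Dense(512, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(1, activation="sigmoid"),   # P(iceberg)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # model = build_iceberg_classifier()
    # model.fit(X_train, y_train, batch_size=32, epochs=30, validation_split=0.2)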



How Machine Learning and AI Could Improve MRIs

Doctors commonly use MRI (magnetic resonance imaging) scans to see parts of the body that aren’t easily visible through methods like X-rays or ultrasounds.
These scans make it easier to reach diagnoses or carry out examinations for other purposes.
Now, it’s possible that machine learning and artificial intelligence (AI) could make MRIs even more useful than before — here are four ways how.
1. Producing High-Quality Images With Less Data
Although MRIs are undeniably beneficial for the people who need them, a scan can take up to an hour depending on the part of the body being imaged.
Moreover, individuals must stay completely still during that time while inside a chamber that can cause claustrophobia. Add in the fact that patients might be in severe pain during an MRI, and it's not hard to see why everyone would appreciate it if the MRI process were shorter.
The FastMRI project includes insights from researchers at New York University (NYU) and Facebook who believe machine learning could produce adequate MRI images in less time than currently used methods.
Typically, an MRI machine takes 2D images and stacks them to make 3D versions. But scientists think it's possible for machine learning to enhance less-detailed MRI images by intelligently filling in the gaps, eventually cutting MRI scan times by up to 90 percent.
The current goal is to run the MRI 10 times faster than usual but achieve an image quality level on par with conventional methods.
2. Using Predictive Algorithms to Minimize Breakdowns
MRI machines feature numerous parts that must work together for best results, including a magnet that must stay cool. The magnet generates a field strong enough to make protons in the body align with it. A radiofrequency current is then sent through the patient to push the protons out of alignment with the magnetic field.
Then, the radiofrequency gets turned off, and sensors on the MRI equipment detect the energy emitted as the protons realign. The faster the realignment happens, the brighter the resultant MRI image.
Medical chillers are specially designed to endure load surges in addition to keeping the temperature constant during normal daily operating loads. High-quality parts are up to the task, but MRI machine failures can still occur.
Many companies in the manufacturing sector use AI tools that predict maintenance needs before total breakdowns happen. The associated executives understand even an equipment malfunction lasting a few minutes can disrupt operations and cost tens or even hundreds of thousands of dollars.
Similarly, a broken MRI machine becomes costly and inconvenient for hospitals. AI algorithms could make predictions about maintenance for proactive prevention.
3. Relying on Smarter MRIs to Aid Better Decision Making
Some hospitals use MRI data before, during and after operations so surgeons can plan how to proceed or determine the extent of a tumor, for example. At Boston's Brigham and Women's Hospital, an MRI machine sits inside the operating room as part of a larger imaging setup called the Advanced Multimodality Image Guided Operating Suite (AMIGO).
The staff at Brigham and Women's also added a mass spectrometer to the AMIGO equipment. Machine learning analyzes data collected by that component and compares it to a decade's worth of historical data about brain tumors while looking at a segmented MRI image.
Then, surgeons benefit from better insights about patients’ tumors. As a result, people may undergo fewer future operations because the first attempts are maximally successful.
Additionally, an Indian startup designed algorithms with deep learning and other advanced intelligent technologies.
The software containing those algorithms works with any MRI machine, CT scanner or X-ray machine. It screens for abnormalities and assesses their severity, delivering results much more quickly than older methods.
Also, some algorithms are trained on over a million images. That means this process could help physicians feel more confident when making diagnoses, thereby reducing potential mistakes and improper treatment plans.
4. Experimenting With AI to Assess the Extent of Brain Damage
Medical technicians perform functional MRIs (fMRIs) to measure brain activity.
Researchers in China developed an AI algorithm they report works in conjunction with fMRIs to predict the likelihood of people with severe brain damage regaining consciousness.
It works by assessing the level of awareness a person has and then using that information and factors related to disorders of consciousness (DOCs) to give a suggested prognosis for recovery.
Emerging Technologies Make MRIs Even More Worthwhile
This brief overview of machine learning and AI in MRI applications shows that these efforts hold plenty of promise.
As technology improves, the related advancements could make MRIs better than ever for patients and care providers alike.



How AI can Help Diagnose Alzheimer's

Technology is evolving at a rapid pace. In the 1990s, personal computers with Internet connections entered the homes of billions of people around the planet. In 2007, Apple introduced the world to the iPhone, and the landscape of cell phones changed dramatically in the wake of this announcement. In almost every area of society, technology has impacted our everyday lives. The world of medical AI news is no exception.
So, in a world where most of us carry tiny computers in our pockets and virtual reality is becoming a regular occurrence, how do you push the envelope? In medical AI news, these limits get advanced every day. And, with a new breakthrough, the medical world is about to be turned on its head.
The Condition
Medical AI news is becoming increasingly relevant today. Robots are roaming around in laboratories and solving math equations. With all this occurring, you may be stunned to learn there’s more going on with medical AI than you even realize.
Alzheimer’s disease is one of the most devastating illnesses a person can face. Alzheimer’s can strike anyone at any time, and gradually deteriorates a person’s memory. Patients in advanced stages of the disease can no longer care for themselves, or recognize their loved ones’ faces. Fear of completely losing your faculties is perhaps the main reason people rank Alzheimer’s second only to cancer as their most dreaded illness.
However, even with such an awful disease, scientific research has brought new hope. And this hope comes in the form of artificial intelligence.
Detection Is Key
As with any disease, it’s not just what the condition does to a person that affects the person’s health. Many times, early diagnosis is the key to survival. Early detection is an area where the medical field has made impressive strides. After all, when doctors can diagnose a disease in its beginning stages, they have many more treatment options at their disposal.
The focus on detection is where the newest medical AI comes into play. Artificial intelligence now gives us a way to detect Alzheimer’s disease before it even really begins to affect the patient. It’s a tremendous find that has provided a bright ray of hope for anyone who rightly fears being afflicted with the disease.
Though doctors still haven’t discovered a cure for Alzheimer’s, this is still a breakthrough. The science behind it is fascinating and paves the way for even more artificial intelligence that can help us solve the puzzle of various diseases.
What It Does
To arrive at a proper diagnosis of Alzheimer’s disease, doctors must run a series of tests such as MRIs or CT scans. These powerful scans can map out different areas of the brain to determine where different activities are taking place, as well as measure deterioration. However, even if they arrive at a positive diagnosis for Alzheimer’s disease, doctors may not be able to precisely see how the condition is developing.
Now, enter the AI, and everything changes.
How It Works
Artificial intelligence is now “smart” enough to look at a brain scan and predict which patient will go on to develop full-blown Alzheimer’s disease. Researchers taught the AI how to do this by feeding it many brain scans and telling the system which patients eventually developed Alzheimer’s.
Specifically, the artificial intelligence can pick out a protein called amyloid. This protein develops in both people who have cognitive impairment and Alzheimer’s sufferers. The AI uses a sophisticated algorithm to detect the tiny differences between the amyloid in Alzheimer’s patients and those with cognitive impairment.
The Future of Medical AI Is Bright
AI has opened up a whole new realm of possibilities for doctors and medical researchers. And, as AI continues to improve, it is only going to advance the treatment options available for patients and their doctors. Cutting-edge technology has improved our lives immeasurably and made this an exciting time to be alive.


This AI Software is Helping Emergency Dispatchers Save Lives

Although ambulance crews are undoubtedly essential players in the process of saving the lives of people in medical distress, dispatchers are the ones who initially assess the situation and make crucial decisions about the urgency of the situation and what kind of help to send.
They consistently display excellence in their work and show calm demeanors many people can’t help but admire and appreciate. But, like almost everyone else, emergency dispatchers could benefit from additional insights — especially those that aren’t immediately evident over the phone. That’s where an AI software application called Corti comes into play.
A Tool to Detect Cardiac Arrest Cases Faster and More Accurately
If a person is having a heart attack, he or she may not be in the appropriate condition to go into substantial detail about symptoms. That’s also true if a child calls on behalf of an adult family member. That’s why Corti uses machine learning algorithms to analyze conversations.
It pays attention to phrases, as well as the tone of voice and breathing patterns. Then, Corti runs an automatic analysis and compares that phone call data to a collection of millions of other conversations to detect identifying patterns. After that, Corti notifies dispatchers if they are dealing with potential cardiac arrest cases.
By displaying information on dispatchers’ computer screens, Corti gives them real-time coaching for talking callers through how to check a person’s breathing and pulse and perform CPR if needed. In Copenhagen, the emergency services department used Corti and found it could pick up on possible cardiac arrest cases faster and with greater accuracy than dispatchers working without the tool.
It Learns With Every Call
People who work in emergency services routinely depend on high-tech, feature-filled interfaces that facilitate communication between the various parties involved in coordinating a response. Adding Corti into the mix could be even more beneficial, especially for proper allocation of resources, because many cardiac arrest cases aren’t obvious to a dispatcher or to the person who has just arrived at the scene.

If a man was installing a shelf in a garage and his spouse heard him fall and ran to help, the initial assumption would likely be that the person had an accident related to the task. The reality could be that he had a heart attack.
It’s not possible for a dispatcher to think back to all his or her past calls and make judgments based on that collective data. However, Corti does something similar with the help of AI — and, just like humans, learns from experiences.
It actually gets smarter the more dispatchers use it. In other words, the program increases its understanding of emergency situations, allowing it to be an even more helpful complement to human dispatchers.
Ongoing developments indicate AI can positively impact health care by improving quality of life and reducing hospital admissions. Corti is a prime example of how AI can deliver need-to-know information to dispatchers when it matters most.
Could AI Increase the Heart Attack Survival Rate?
Statistics collected in 2016 by the American Heart Association found the overall survival rate of people who have cardiac arrests outside hospitals was only 12 percent. A few years before that, the percentage was even lower.
Saving the life of someone who’s having a heart attack is truly a situation where every minute counts, too. Data shows for every minute in a cardiac arrest case that passes without intervention via CPR or an automated external defibrillator, the chances of survival decrease by 7 to 10 percent.
The successful uses of Corti so far understandably make people feel hopeful and wonder if AI could be a key component in helping the hundreds of thousands of people who suffer cardiac arrests each year survive them.
Besides taking dispatchers through each step of coaching a person to check for a pulse and breathing, and then perform CPR if necessary, Corti notices things dispatchers may miss and relies on the power of machine learning-driven knowledge.


How AI Hopes to Transform Social Media

The relationship between AI and social media is growing deeper than ever. With an increased focus on big data processing and management, some of the most popular online communities are using informatics in new and exciting ways. Many are even utilizing the technology to bolster the productivity of their users.
AI Research by the Facebook Development Team
It may come as a surprise, but Facebook is leading the pack in the race to combine AI and social media. The company began dabbling in deep learning at the end of 2013 and has been making steady progress ever since. Although much of its planning remains shrouded in secrecy, the AI-backed features we’ve already seen are groundbreaking and innovative.
Most of Facebook’s current research into AI and social media revolves around gaining better insight into why people share specific types of content across social media. According to Facebook’s CEO, Mark Zuckerberg, his site is “building systems that can recognize everything that’s in an image or a video.”


For now, Facebook users have to identify and tag friends in their digital photos manually. Depending on how many people are in a picture, or how many images you have on the site, this could be a painstaking process. But this could soon be a thing of the past thanks to AI.
Driving Productivity in Other Ways
Social media tends to come with a certain stigma attached. Although it’s often touted as a waste of time, modern social media has a lot to offer our current, tech-oriented society. It’s also had a direct impact on productivity both in and outside of the workplace – but it might not be the effect you expected.
Instead of hampering productivity, many studies indicate that social media – when used effectively – is beneficial to productivity. Businesses are using social media to connect with their customers, advertise products or provide helpful updates and informative posts. Their employees use it to connect with co-workers or exchange information.
But the average consumer enjoys most of the benefit. With AI powering fully automated and sophisticated chatbots, some of which are nearly identical to humans, it’s becoming commonplace to receive customer support at any hour of the day or night. Although many of the current chatbots serve company websites, they’re increasingly integrating into our online communities – and you might not even realize it.
Some experts think that chatbots might be the key to strengthening social media engagement and the individual customer experience. Sephora’s Kik chatbot, for example, frequently updates users with fashion and beauty tips. But this isn’t generic or random advice. Instead, the chatbot uses advanced AI analytics to target the specific interests of individual shoppers. Not only does this increase profitability on behalf of the brand, but it makes it easy for consumers to find the exact products or services they need.
The popular social media portal known as LinkedIn, which focuses on professional networks instead of personal relationships, utilizes AI to analyze a job applicant’s work history, identify their strengths and weaknesses and highlight opportunities that pertain to their specific skills. Other websites use similar, AI-powered tools to connect jobseekers with relevant positions.
The Future of Smart Social Media
As useful as all of these innovations are, they’re just the beginning of the AI revolution. Modern technology has already resulted in smart cell phones, home appliances and automobiles, and these inventions are growing more intelligent every day. When you combine this self-aware and self-learning technology with the accessibility and popularity of social media, it’s easy to see the potential for explosive growth centered on AI within the coming days, weeks and months.


Paige.AI Combats Cancer With AI and Computational Pathology

For decades pathologists have rendered their cancer diagnoses by performing a biopsy and examining a patient’s tumour sample under a microscope. Now, an increasing number of top-tier pathologists are adopting artificial intelligence techniques to improve their cancer diagnoses.

Paige.AI is a New York-based startup that fights cancer with AI. Launched last month, the company has an exclusive data license with the Memorial Sloan Kettering Cancer Center (MSK) — the largest cancer research institute in the US — which has a dataset of 25 million pathology cancer images (“slides”).

Typically, a pathologist must invest a significant amount of time examining a patient’s numerous tumour slides, each of which could be 10+ gigapixels when digitized at 40X magnification. Even the best pathologists can make a misdiagnosis, and it is not uncommon for professionals to disagree on diagnoses.
This is why computational pathology for cancer research has gained traction over the last ten years or so. The technology incorporates massive amounts of data, including pathology, radiology, clinical, molecular and lab tests; a computational model based on machine learning algorithms; and a visualized presentation interface that is understandable for both pathologists and patients.
“Computational pathology solutions will help streamline workflows in the future by screening situations that do not require a pathologist review,” said Jeroen van der Laak, Associate Professor at Radboud University Medical Center, in an interview with Philips Healthcare.

Dr. Thomas Fuchs is the Director of Computational Pathology at MSK and an early pioneer in the theoretical study of computational pathology. He has many years of experience in the development and application of advanced machine learning and computer vision techniques for tackling large-scale computational pathology challenges.
Last month Dr. Fuchs assumed an additional role as Founder and CEO of Paige.AI. He told Synced he believed the time was right to build Paige.AI because the requirements are all in place: scanners can deliver digital images with quality comparable to what pathologists see under the microscope; cancer centres scan some 40,000 pathology slides each month; and deep learning algorithms are well-suited for large-scale data.

Paige.AI’s technology is built on machine learning algorithms trained at petabyte-scale from tens of thousands of digital slides. Three models are utilized to solve different problems: convolutional neural networks for tasks such as image classification and segmentation, recurrent neural networks for information extraction from pathology reports, and generative adversarial networks to learn the underlying distribution of the unlabeled image data and to embed histology images in lower dimensional feature spaces.
Tech giants believe their frontier machine learning algorithms have huge potential to revamp conventional diagnostic methodologies in the healthcare market, increasing accuracy and reducing costs. IBM has been using slides to train deep neural networks to detect tumours since 2016.
Google, meanwhile, has released research on how deep learning can be applied to computational pathology by creating an automated detection algorithm to improve pathologists’ workflow. Google successfully produced a tumor probability prediction heat map algorithm whose localization score (FROC) reached 89 percent, significantly outperforming pathologists’ average score of 73 percent.
“Companies like Microsoft and IBM are doing pathology, and in general, it is good for the whole field,” says Dr. Fuchs, who also warned that tech companies unfamiliar with the healthcare sector might have a hard time. “You have to really understand the variety of workflows and the community, and where and how AI can help. Besides, as far as I know, all previous papers published were based on a very tiny data set. Increasing the dataset from a few hundred images to hundreds of thousands of images can make a huge difference.”
In the short term, Paige.AI will provide pathologists with its “AI Module” application suite, equipped with a dedicated physical slide viewer that can integrate with any microscope. The AI Module targets prostate, breast and lung cancers and can perform tasks such as cancer detection, quantification of tumour percentages, and survival analysis.
Paige.AI has already rolled out its product institution-wide at MSK, and aims to deliver disease-specific modules to pathologists later this year.

Paige.AI’s forte in algorithms and access to large-scale data attracted interest from Breyer Capital, which led a US$25 million Series A Funding Round for the company. Founder and CEO of Breyer Capital Jim Breyer, a venture capitalist renowned for his smart investments — most notably Facebook — wrote in a Medium blog, “Paige.AI is poised to become a powerhouse in computational pathology and an undisputed leader among thousands of healthcare AI competitors.”
Paige.AI certainly does not intend to limit its output to slide viewers — the company aspires to reshape the entire diagnosis and treatment paradigm. “With Paige.AI, we can, for example, based on hundreds of thousands of slides, come up with a better grade because you can actually correlate so many patients with the outcomes. Then we compute that correlation, and of course, change how you grade patients and how and which medications are prescribed,” says Dr. Fuchs.
Although the road ahead for Paige.AI is bound to be challenging, especially as the company is still at a very early stage in its development, Dr. Fuchs is determined to raise his company to the forefront in AI implementation in healthcare, and their research is likely to spark further technological breakthroughs for computational pathology.


Google: Our Artificial Intelligence To Fight Pedophilia On The Web

The AI can detect child abuse images with more than 700 percent greater effectiveness than previous systems

An algorithm to combat pedophilia on the web. Google has announced a new system based on artificial intelligence that can analyze large databases of images to find child pornography.

The system, based on a neural network, will help identify material that is exchanged online in order to track down those responsible for the abuse. "Identifying new images means that children who are being abused today are much more likely to be identified and protected," explains Nikola Todorovic, a Google engineer, in the post. "We are making this tool available for free to NGOs and industry partners through our Content Safety API, a toolkit that increases the capacity to review this content while requiring fewer people to be exposed to it."
Work on computational tools to fight child abuse online, the post recalls, began in the early 2000s, and this latest algorithm appears to be more than 700 percent more effective at finding such images than the tools previously deployed.

Since last year, the London police have been working on an artificial intelligence able to identify images of abuse, first to speed up their discovery on the internet and then to reduce the psychological trauma for investigators who must manually check folders of often disturbing images. The Google AI also makes it possible to find content that had not previously been flagged as child abuse material.



How to Solve the Memory Challenges of Deep Neural Networks

Memory is one of the biggest challenges in deep neural networks (DNNs) today. Researchers are struggling with the limited memory bandwidth of the DRAM devices that have to be used by today's systems to store the huge amounts of weights and activations in DNNs. DRAM capacity appears to be a limitation too. But these challenges are not quite as they seem.

Computer architectures have developed with processor chips specialised for serial processing and DRAMs optimised for high density memory. The interface between these two devices is a major bottleneck that introduces latency and bandwidth limitations and adds a considerable overhead in power consumption.
Although we do not yet have a complete understanding of human brains and how they work, it is generally understood that there is no large, separate memory store. The long- and short-term memory function in human brains is thought to be embedded in the neuron/synapse structure. Even a simple organism such as the C. elegans worm, with a neural structure made up of just over 300 neurons, has some basic memory functions of this sort.
Building memory into conventional processors is one way of getting around the memory bottleneck, opening up huge memory bandwidth at much lower power consumption. However, on-chip memory is expensive in silicon area, and it would not be possible to add the large amounts of memory currently attached to the CPUs and GPUs used to train and deploy DNNs.
Why do we need such large attached memory storage with CPU and GPU-powered deep learning systems when our brains appear to work well without it?

WHY DO DEEP NEURAL NETWORKS NEED SO MUCH MEMORY?

Memory in neural networks is required to store input data, weight parameters and activations as an input propagates through the network. In training, activations from a forward pass must be retained until they can be used to calculate the error gradients in the backwards pass. As an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass. If you use a 32-bit floating-point value to store each weight and activation, this gives a total storage requirement of 168 MB. We could halve or even quarter this storage requirement by using a lower-precision value to store these weights and activations.
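A quick back-of-the-envelope check of that figure, using the parameter and activation counts quoted above:

    # Back-of-the-envelope check of the ResNet-50 storage figure quoted above.
    weights = 26e6         # ~26 million weight parameters
    activations = 16e6     # ~16 million forward-pass activations
    bytes_per_value = 4    # 32-bit floating point

    total_mb = (weights + activations) * bytes_per_value / 1e6
    print(f"full precision: {total_mb:.0f} MB")      # ~168 MB
    print(f"half precision: {total_mb / 2:.0f} MB")  # ~84 MB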

A greater memory challenge arises from GPUs' reliance on data being laid out as dense vectors so they can fill very wide single instruction multiple data (SIMD) compute engines, which they use to achieve high compute density. CPUs use similar wide vector units to deliver high-performance arithmetic. In GPUs the vector paths are typically 1024 bits wide, so GPUs using 32-bit floating-point data typically parallelise the training data up into a mini-batch of 32 samples, to create 1024-bit-wide data vectors. This mini-batch approach to synthesizing vector parallelism multiplies the number of activations by a factor of 32, growing the local storage requirement to over 2 GB.
GPUs and other machines designed for matrix algebra also suffer another memory multiplier on either the weights or the activations of a neural network. GPUs cannot efficiently execute the small convolutions used in deep neural networks directly. So a transformation called 'lowering' is used to convert those convolutions into matrix-matrix multiplications (GEMMs), which GPUs can execute efficiently. Lowering cures the execution inefficiency, but at the cost of multiplying either the activation storage or the weight storage by the number of elements in the convolution mask, typically a factor of 9 (for 3x3 convolution masks). Finally, additional memory is also required to store the input data, temporary values and the program's instructions. Measuring the memory use of ResNet-50 training with a mini-batch of 32 on a typical high-performance GPU shows that it needs over 7.5 GB of local DRAM.
You might think that by using lower-precision compute you could reduce this large memory requirement, but that is not the case for a SIMD machine like a GPU. If you switch to half-precision data values for weights and activations, with a mini-batch of 32, you would only fill half of the SIMD vector width, wasting half of the available compute. To compensate, when you switch from full precision to half precision on a GPU, you also need to double the mini-batch size to induce enough data parallelism to use all the available compute. So switching to lower-precision weights and activations on a GPU still requires over 7.5 GB of local DRAM storage.
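The multipliers described above compound roughly as follows; this is an illustrative sketch using the figures quoted earlier, since the exact footprint depends on the framework and on which layers are lowered to GEMMs.

    # Rough sketch of the memory multipliers described above (illustrative only).
    activations_per_sample = 16e6   # ResNet-50 forward-pass activations (from above)
    bytes_per_value = 4             # 32-bit floating point
    mini_batch = 32                 # fills a 1024-bit SIMD path with 32-bit values

    batch_activation_gb = activations_per_sample * mini_batch * bytes_per_value / 1e9
    print(f"activation storage for one mini-batch: {batch_activation_gb:.1f} GB")  # ~2 GB

    # 'Lowering' a 3x3 convolution to a GEMM replicates one operand roughly
    # mask-size times, so whichever tensor is lowered temporarily grows by ~9x.
    lowering_factor = 3 * 3
    print(f"lowering multiplier for a 3x3 convolution: {lowering_factor}x")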
You cannot keep such large amounts of storage data on the GPU processor. In fact, many high performance GPU processors have only 1 KB of memory associated with each of the processor cores that can be read fast enough to saturate the floating-point datapath. This means that at each layer of the DNN, you need to save the state to external DRAM, load up the next layer of the network and then reload the data to the system. As a result, the already bandwidth and latency constrained off-chip memory interface suffers the additional burden of constantly reloading weights as well as saving and retrieving activations. This significantly slows down the training time while increasing power consumption.

THREE APPROACHES FOR MEMORY-SAVING TECHNIQUES

Although large mini-batches improve computational efficiency by providing parallelism, research shows that large mini-batches lead to networks with a poorer ability to generalise and that take longer to train. Besides, machine learning model graphs already expose enormous parallelism. True graph machines such as Graphcore's IPU don't need large mini-batches for efficient execution, and they can execute convolutions without the memory bloat of lowering to GEMMs. So IPUs have a very much smaller memory footprint than GPUs, small enough to fit on the processing chip even for large networks. The efficiency and performance gains from doing this are huge.
Decades of work on compilers for sequential programming languages means there are several techniques to reduce memory further. First, operations such as activation functions can be performed 'in-place' allowing the input data to be overwritten directly by the output. In this way the memory state can be reused. Secondly, memory can be reused by analysing the data dependencies between operations in a network and allocating the same memory to operations that do not use it concurrently.
This second approach is particularly effective when the entire neural network can be analysed at compile time to create a fixed allocation of memory, since the runtime overheads of memory management then reduce to almost zero. The combination of these techniques has been shown to reduce memory use in neural networks by a factor of two to three. Applying these optimisation techniques to a parallel program is analogous to the dataflow analysis a compiler performs on a sequential program to reuse registers and stack memory, which are far more efficient than dynamic memory allocation routines.
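As a minimal illustration of these two ideas outside any particular framework, the hypothetical NumPy snippet below applies an activation in place and then writes a subsequent result into a preallocated, reused buffer:

    # In-place activation: the ReLU result overwrites its input buffer
    # instead of allocating a fresh tensor.
    import numpy as np

    x = np.random.randn(32, 512).astype(np.float32)   # output of some earlier layer
    np.maximum(x, 0.0, out=x)                          # ReLU applied in place

    # Buffer reuse: once a tensor's last consumer has run, its storage can be handed
    # to a later operation. Here a preallocated scratch buffer receives the next
    # layer's output rather than triggering a new allocation.
    w = np.random.randn(512, 512).astype(np.float32)
    scratch = np.empty((32, 512), dtype=np.float32)
    np.dot(x, w, out=scratch)                          # writes into the reused memory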

How is Artificial Intelligence (AI) Impacting Cyber Security?

Traditional methods to detect malware and cyber security threats are failing. Cyber criminals are constantly coming up with new ways to bypass firewalls and become a threat to an organization’s security. The only way to fight it out is to be more prepared and smarter than the hackers.

As per the 2017 Cybersecurity Trends Report, cyber security budgets are set to increase as security professionals anticipate more attacks in the next 12 months. The report indicates that organizations will increase their security spend on cloud infrastructure (33%), training/education (23%) and mobile devices (23%).
Cyber-attacks are getting more complex and smarter
If you think 2016 was bad for cyber-attacks, 2017 proved to be worse. Malware, DDoS attacks, and other types of cyber threats are becoming more serious. In 2016 alone, some 357 million malware variants were detected, and a number of them left businesses crippled and scouting for better data security. The use of IoT has increased the threat of cyber-attacks. The security infrastructure on these devices determines how secure they are, and as long as there are weak links in the system, the threat of malware attacks will loom.

The threat to a business increases if it has data flowing from different sources. The data is constantly exposed to more malware, bots and DDoS attacks. Network security is also a matter of concern for businesses. Threats to networks have become more common, hacks have become more complex, and they are no longer just a concern for large organizations. Traffic on the network needs to be consistently monitored, inspected and correlated.
How is Artificial Intelligence being used?
In order to detect unusual behavior on a network, newer security technologies are using artificial intelligence programs. AI uses machine learning to detect similarities and differences within a data set and report any anomalies. Machine learning is a part of AI that can help recognize patterns in data and predict effects based on past experience and data. AI systems, in most cases, use machine learning technology to generate results that replicate human functioning. As per an article published in Forbes titled Separating Fact From Fiction: The Role Of Artificial Intelligence In Cybersecurity, ML, coupled with application isolation, prevents the downside of malware execution — isolation eliminates the breach, ensures no data is compromised and that malware does not move laterally onto the network.
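As a minimal sketch of the kind of anomaly detection described above, the hypothetical example below trains scikit-learn's IsolationForest on simulated network-traffic features and flags outlying connections; the feature columns and values are illustrative, not taken from any particular product.

    # Minimal sketch of ML-based anomaly detection on network traffic.
    # The per-connection features are illustrative; real systems use far richer telemetry.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical features: bytes sent, bytes received, duration (s), destination port
    normal_traffic = rng.normal(loc=[5000, 8000, 30, 443],
                                scale=[1500, 2500, 10, 1], size=(1000, 4))

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # predict() returns -1 for connections scored as anomalous, 1 for normal
    new_connections = np.array([
        [5200, 7900, 28, 443],     # looks like ordinary traffic
        [900000, 120, 2, 6667],    # huge outbound transfer on an unusual port
    ])
    print(detector.predict(new_connections))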