We're celebrating AI in this edition of Our Week in Digital! In a week when the founding father of computer science and artificial intelligence, Alan Turing, was revealed as the face of the new £50 note, we thought it would be apt! Quite apart from his landmark code-breaking efforts during WW2, this brilliant mathematician played a significant role in the development of early computers and still has an enormous impact on our industry today.
So, Mr Turing…this edition is for you!
ContractPodAi scores $55m for its AI-powered contract management software.
ContractPodAi has secured $55 million in Series B funding. The London-based startup develops AI-powered contract lifecycle management software, offering an end-to-end solution spanning the three main aspects of contract management.
Aimed predominantly at corporate customers, its offering claims to streamline the contract management process, reducing the burden on corporate in-house legal teams. It handles contract generation, a contract repository, and third-party review.
ContractPodAi co-founder and CEO Sarvarth Misra remarks:
“The legal profession has been historically behind the curve in technology adoption and our objective here is to support the digital transformation of legal departments via our contract management platform”.
The startup already has offices across the globe: San Francisco, New York, Glasgow and Mumbai, in addition to its London HQ. Armed with new capital, ContractPodAi says it plans to “significantly” scale up its product development, sales, and customer success teams globally.
This capital injection will allow ContractPodAi to make strides towards its objective: to provide one contract management ecosystem covering all aspects of contract management functionality.
Although ContractPodAi is not the only provider in the field, Misra goes on to say that it is the “fixed, transparent pricing and ability to provide full implementation as part of the annual SaaS” that differentiates the company from the rest.
AI sniffs out lost dogs
Facial recognition software is nothing new to Chinese startup Megvii; they already supply their AI facial recognition software to the Chinese government’s surveillance programme. This week, Megvii announced that they have expanded the software beyond humans: their new AI program is trained to recognise dogs by their nose prints. Much like human fingerprints, dogs’ nose prints are unique…no two are the same.
Dog owners can use the app to register their dog simply by scanning its snout with their smartphone camera. The app asks for images of the nose from several different angles, which it then uses to identify each individual animal. So far, Megvii claim that their tech has successfully reunited 15,000 pets with their owners.
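Megvii hasn’t published how its matcher works, but systems of this kind typically turn each photo into a numeric “embedding” and identify a query by its closest registered match. The sketch below is purely illustrative: the registry, the toy vectors and the 0.9 threshold are all invented for the example, and a real system would produce embeddings with a neural network rather than by hand.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(query_embedding, registry, threshold=0.9):
    """Return the registered dog whose embedding best matches the query,
    or None if no match clears the similarity threshold."""
    best_id, best_sim = None, -1.0
    for dog_id, embedding in registry.items():
        sim = cosine_similarity(query_embedding, embedding)
        if sim > best_sim:
            best_id, best_sim = dog_id, sim
    return best_id if best_sim >= threshold else None

# Toy registry: in a real system these vectors would come from a model
# run over the multi-angle snout photos captured at registration time.
registry = {"rex": (1.0, 0.0, 0.2), "fido": (0.1, 1.0, 0.0)}
```

A fresh photo of a registered dog yields an embedding close to its stored one and clears the threshold, while an unknown dog falls below it and returns `None`.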
Animal facial recognition has become more widespread in recent years. The concept has been used by wildlife conservation researchers and, more recently, to locate missing pets, as with the US app Finding Rover.
However, Megvii’s product aims to go one step further. They intend to extend their work with the Chinese government and use their AI to monitor what they call “uncivilised dog keeping”. Through the app, they hope to be able to issue fines to irresponsible pet owners who, for example, don’t pick up after their dogs or walk them without leashes in public spaces.
AI solves Rubik’s Cube in 1 second.
Researchers at the University of California have created an AI that can solve a Rubik’s Cube in just over 1 second! For years, we have struggled to fathom the 3D logic puzzle, which has confounded us since its invention in 1974.
The algorithm, DeepCubeA, tackled the problem in a very different way to the strategies employed by humans, states its creator, Professor Pierre Baldi. He goes on to say that the tech “learned on its own”, and that the AI has a form of reasoning that is “completely different from a human’s”.
The study, published in Nature Machine Intelligence, saw the algorithm given 10 billion different combinations of the puzzle, with the target of decoding them all within 30 moves. The AI was then tested on 1,000 of these and managed to solve all of them, finding the shortest path to the solution in about 60% of cases. Humans able to solve the puzzle quickly generally do so in about 50 moves; DeepCubeA needed an average of just 28.
Although brilliant, DeepCubeA is not the first or the speediest non-human to solve the puzzle. That title goes to a system devised at the Massachusetts Institute of Technology: the ‘min2phase’ algorithm, as it was called, completed the riddle three times faster. Not only that, in 2018 researchers built a robot that was able to complete the task in an incredible 0.38 seconds.
Although this is a fun test, the tech used in the DeepCubeA project could hold solutions to real-world problems. Neither of the aforementioned projects used a neural network, which is able to mimic how the human brain works. Creating a system that teaches itself to complete the challenge has been heralded as the first step towards creating an AI that can move beyond games; one that holds the human abilities to “think, reason, plan and make decisions,” said Prof Baldi.
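The published DeepCubeA work pairs a deep neural network that estimates the cost-to-go of any cube state with an A*-style search over moves. As a rough illustration of that search idea (not the actual DeepCubeA code), here is A* on the much smaller 8-puzzle, with a hand-written Manhattan-distance heuristic standing in where DeepCubeA would use its learned network:

```python
import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # 0 is the blank; tile v belongs at index v

def neighbors(state):
    """Yield every state reachable by sliding one tile into the blank (3x3 grid)."""
    s = list(state)
    i = s.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            t = s[:]
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

def heuristic(state):
    """Manhattan-distance estimate of moves remaining.
    This is the piece DeepCubeA replaces with a deep net's cost-to-go estimate."""
    d = 0
    for i, v in enumerate(state):
        if v:
            d += abs(i // 3 - v // 3) + abs(i % 3 - v % 3)
    return d

def a_star(start, goal=GOAL):
    """Expand states in order of (moves so far + estimated moves remaining)."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + heuristic(nxt), g + 1, nxt, path + [nxt]))
    return None
```

The quality of the heuristic is the whole game: with an admissible estimate A* returns a shortest path, and DeepCubeA’s contribution was learning such an estimate for the vastly larger Rubik’s Cube state space on its own.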
The question, he believes, is “How do we create advanced AI that is smarter, more robust and capable of reasoning, understanding and planning?” Although this may be some time coming, he believes his project takes a “hefty step” in that direction.
Musk’s Neuralink looks to begin outfitting human brains with faster input and output as soon as 2020.
Led by Elon Musk, startup Neuralink is working on technology based around ‘threads’ which can be implanted inside the human brain. These threads promise to cause much less damage to the surrounding brain tissue than the probes used for today’s brain-computer interfaces.
Using this particular tech, the company hopes to tackle and solve some of the most pressing issues affecting patients with brain disorders today. The goal is medical, with the plan being to use a Neuralink-created robot to place the tiny threads deep within a person’s brain tissue. Once implanted, these ‘threads’ will be capable of performing both read and write operations at very high data volumes.
This all sounds very futuristic, and to some extent it is! In a briefing carried out earlier this week, Neuralink’s scientists highlighted that there is still some way to go before the company can get anywhere near offering a commercial service. So why go public now if it’s such a long way off? The reason lies in the need to collaborate with the academic and research communities. Working in secret would not provide such an open arena in which to research and publish papers.
Neuralink co-founder and president Max Hodak is hopeful that the tech could provide revolutionary medical solutions in the near future. He cites potential applications such as enabling amputees to regain mobility through prosthetics, as well as reversing vision, hearing or other sensory deficiencies. Neuralink are hoping to begin work with human test subjects as early as next year, including possible collaborations with neurosurgeons at Stanford and other notable academic institutions.
Long term, Musk identifies the goal as finding a way to “achieve a sort of symbiosis with artificial intelligence.”
Whether they get there or not, achieving just the medical objectives would be life-changing for the many individuals whose daily lives are adversely affected by brain and physical disorders.
Google’s Parrotron aims to give a clearer voice to people with atypical speech.
Across the world, millions of people are affected by atypical speech. This poses significant challenges for accessibility engineers developing AI-driven speech recognition and text-to-speech synthesis products, who must accommodate a range of impairments for which only limited data sets are available.
Google believe they have a solution to this! Parrotron is Google’s ongoing research initiative which aims to help those with affected speech become better understood. As part of this, Google’s scientists are investigating ways to minimise word substitution, deletion, and insertion errors in speech models.
Parrotron leverages an end-to-end AI system trained to convert speech from a person with an impediment directly into “fluent” synthesised speech. It disregards visual cues, such as lip movements, and focuses solely on the speech signal. The AI is trained in two phases using parallel corpora of input/output speech pairs.
Parrotron’s reveal comes after Google unveiled three separate accessibility efforts at its I/O 2019 developer conference: Project Euphonia, which aims to help people with speech impairments; Live Relay, which is designed to assist deaf users; and Project Diva, which gives people some independence and autonomy via Google Assistant.
Through this research, the team at Google have reported early success, revealing that their work has led to significant quality improvements. In some cases, Parrotron’s output reduced the word error rate of Google’s automatic speech recognition from 89% to 32%.
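Word error rate, the metric behind those figures, counts the word-level substitutions, deletions and insertions needed to turn the recogniser’s transcript into the reference, divided by the number of reference words. A minimal sketch of the computation (the example sentences are invented, and Google’s evaluation pipeline is of course far more involved):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1,   # insertion
                           substitution)
    return dp[-1][-1] / len(ref)
```

For instance, a transcript that drops two of six reference words scores 2/6 ≈ 0.33, and a perfect transcript scores 0, which is why driving the rate from 89% down to 32% represents such a large gain in intelligibility.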
Looking ahead, the team at Google hope to build on this success by moving from a combination of independently tuned AI models to a single model, which they expect will result in “significant” performance improvements and greatly simplify Parrotron’s architecture.
All these AI advances have led us to wonder: where will this disruptive trend go next? What are your predictions? We’d love to hear your views. As always, please leave your thoughts below!
Our weekly round-up would be incomplete without some Big Tech news…
EU opens formal competition investigation into Amazon over use of merchant data.
“I have… decided to take a very close look at Amazon’s business practices and its dual role as marketplace and retailer (and) to assess its compliance with EU competition rules,” says Margrethe Vestager, the EU’s antitrust commissioner.
It was revealed this week that the EU’s competition watchdog is launching a major antitrust investigation into Amazon amid suspicions that the company mishandles merchant data.
As it stands, the investigation will examine two issues: first, Amazon’s standard contracts with marketplace sellers; and second, the role that data plays in choosing the winners of the “buy box”, which allows buyers to add items from a specific retailer to their shopping carts.
Ms Vestager launched an early probe into Amazon last year, turning her attention away from search giant Google. In fact, she has led a one-woman campaign against Silicon Valley’s big hitters. During her term in office, she has handed Google €8.2bn (£7.4bn) in fines, scrutinised Apple over taxes and hit Facebook for its data practices.
The last time that Amazon found itself in the EU competition authorities’ crosshairs was in 2014, at which time it paid a fine after reaching a settlement over contracts struck with publishers for electronic books for its Kindle e-readers.
This latest investigation could lead to fines or restrictions in the way Amazon is able to operate. Although Amazon have indicated that they plan to cooperate fully with the investigation, they deny any wrongdoing.
A company lawyer told a US Congressional committee that Amazon does not favour its own-branded products in search results, claiming the company “merely makes decisions based on what customers want.” During the hearing in which Amazon were held to account, he stated:
“We apply the same criteria whether you’re a third-party seller or Amazon”.
Amazon have released thousands of their own products on the site in recent years, yet they have clearly stated that they do not use sales data from independent merchants to compete with them.
Big tech companies have the potential to monopolise the market. This, together with their alleged misuse of data, has been called into question frequently in recent months and years, and lawmakers and watchdogs undeniably have them in their sights. Could this latest EU investigation result in yet another fine issued to a corporation with pockets deep enough to consider it small change? Indeed, you have to wonder whether such fiscal penalties are in any way effective.