Happy New Year!
Welcome to the first edition of ‘Our Week in Digital’ of 2020. We hope you have all had an enjoyable holiday and a peaceful and restful break.
Here at Ignite Digital, we are back with a bang and ready for Q1!
We have started 2020 as we mean to go on.
We couldn’t start our first edition without updating you on what’s been going on with us. Our big news is that we have a new member of the team. We have welcomed Ollie, who has joined the ranks as a lead technology recruiter. Ollie brings with him a wealth of tech recruitment experience. We are very excited to have him on board as we embark on several large-scale recruitment projects! Exciting times!
Looking further afield, we have kicked off the new year and the first edition of 2020 by reporting on some of our top stories from the world of digital and tech.
Unsurprisingly, AI features heavily…a tech trend we anticipate will dominate 2020.
Read on for more!!
Samsung’s Artificial Human project ‘Neon’ creates lifelike AI
Kicking off our first edition is a story from tech giant Samsung. According to leaked footage of the firm’s secretive Neon project, the South Korean tech giant has developed artificial intelligence that is almost indistinguishable from a real human.
Neon comes from Samsung’s Technology and Advanced Research (STAR) Labs in the United States. It is the latest artificial intelligence platform to be created by Samsung and, from the footage, it also appears to be the most advanced, arguably the most human-like AI yet shown.
Pranav Mistry, who worked on the project, has revealed that the technology is unlike anything seen before and can “autonomously create new expressions, new movements, new dialogue (even in Hindi), completely different from the original captured data”.
Although undeniably novel, the avatars left some unconvinced by their realism. One Twitter user described them as “creepy and deformed”, while another called them “weird”.
Other AI developments
Other tech giants have embarked on similar AI journeys. In March last year, Facebook announced its Codec Avatars project which aims to allow people to create realistic versions of themselves using 3D capture technology.
Facebook’s idea is to use these avatars in a virtual world where users can connect with friends and family in a three-dimensional social network, a concept referred to as “social presence”. It is similar to the fictional worlds depicted in the sci-fi novel Ready Player One.
The social network giant claims that these avatars would help make social connections in VR as natural and commonplace as those in the real world. These experiences are delivered through hardware built by Facebook’s VR subsidiary, Oculus.
Samsung used the CES conference in Las Vegas as the platform to officially unveil project Neon and their avatar.
For now, Samsung has not revealed what it plans to use the technology for. Some have speculated that it could be used for online customer service, or act as a virtual receptionist at hotels and other establishments. Other commentators have suggested it may even be used to explore virtual reality settings, much as other firms (such as Facebook) have used their own lifelike avatars.
Quantum computing adopted by carmakers and airlines in world first
Another big story making it into our first edition of 2020 concerns quantum computing, a topic we covered several times in 2019. Super-powerful quantum computers have huge possibilities, with the potential to revolutionise everything from manufacturing to medical research. Up to this point, however, they have not found any commercial uses.
This looks set to change. Transport heavyweights Daimler and Delta Air Lines have teamed up with IBM to develop real-world applications for the developing technology.
Jamie Thomas, general manager of strategy and development for IBM has said that Daimler and Delta are in good company. More than 100 clients are already experimenting with commercial quantum computing. In so doing, they are looking to solve problems such as “risk analytics and option pricing, advanced battery materials and structures, manufacturing optimisation, fraud detection, chemical research, logistics and more”.
For Daimler and Delta, the possibilities are massive. For the former, it could mean electric vehicle batteries capable of powering a car for more than 1,000km on a single charge; indeed, improving the capacity and charge speed of batteries has been described as the “Achilles’ heel of electric vehicles”.
For Delta, it could revolutionise route scheduling. Of the IBM partnership, Delta CIO Rahul Samant remarked that it would allow the airline to “draw the blueprints” for quantum computers within the industry.
How is it done?
Such strides are made possible by combining the unique properties of quantum physics with computer science. The result is a level of processing power that is exponentially more powerful than traditional computers.
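To make that “exponentially more powerful” claim concrete: a register of n qubits is described by 2^n complex amplitudes, so the classical description of the machine’s state doubles with every qubit added. A toy statevector sketch (illustrative only, and not how real quantum hardware is programmed):

```python
import numpy as np

# A register of n qubits is described by 2**n complex amplitudes;
# this doubling with each added qubit is the source of the exponential
# gap over classical machines mentioned above.
def uniform_superposition(n_qubits):
    dim = 2 ** n_qubits
    # equal amplitude on every basis state, normalised so that
    # measurement probabilities sum to 1
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(10)
print(len(state))  # 1024 amplitudes for just 10 qubits
print(round(float(np.sum(np.abs(state) ** 2)), 6))  # total probability: 1.0
```

At 50 or 60 qubits, storing those amplitudes classically becomes infeasible, which is why the quantum supremacy milestone discussed below mattered.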
Realising this promise is still some way off though. To put its journey into perspective, ‘Quantum Computing’ is a concept that was first proposed back in the early 1980s. It took until 2019 for one of these machines to perform a calculation that would be impossible using a classical computer.
This milestone has since been referred to as quantum supremacy. It was first achieved by researchers from Google, who heralded the landmark as “a much-anticipated computing paradigm”, one that finally demonstrated the technology’s capabilities.
Jeanette Garcia, a senior manager in the quantum applications, algorithms and theory team at IBM Research, believes that although these advances are still in the early stages, the possibilities are massive.
“As they improve, the machines will become exponentially more powerful. So while we haven’t yet achieved quantum advantage, this type of research is the foundational work that will eventually get us there.”
Arduino introduces low-code way to design IoT hardware
Our next story making it into the first edition of 2020 covers another of our 2020 trends to watch: IoT. Arduino, designer of the well-known open-source microcontroller boards, has launched a new low-code solution for product creators designing hardware for the internet of things (IoT).
The company wants to simplify the creation of modular hardware to power the smart, connected objects we use every day. Its new tool lets creators design, build, measure, and explore various prototypes in just one day.
Arduino made the announcements at CES 2020 earlier this week. These innovations mean that companies can do all the work themselves without expensive consultations or lengthy integration projects.
Arduino has drawn on its experience in frictionless design to enable enterprises to quickly and securely connect remote sensors to business logic within one simple IoT application development platform.
This development platform is supported by Arduino hardware, which already features on-board crypto-authentication chips and certified comms modules spanning Wi-Fi, BLE, LoRa, LTE Cat-M, and NB-IoT. The boards are also equipped with powerful 32-bit ARM microcontrollers, so they are already set up for any low-power IoT deployment.
Millions of users and thousands of companies across the world, including many large enterprises, already use Arduino as an innovation platform. Existing partnerships of note include Amazon, Arm, Bosch, Intel, Google, Microsoft, and Samsung.
The reach of IoT
It is not only big-hitting companies that recognise the value of IoT. Many SME businesses also wish to harness the potential of connectivity but lack the specialist engineering resources or budget required for conventional IoT projects.
These smaller businesses are increasingly using Arduino as a way to simplify and accelerate their IoT deployments.
Also at CES 2020, Arduino announced the powerful new low-power Arduino Portenta Family. Designed for demanding industrial applications, AI edge processing, and robotics, it features a new standard for open high-density interconnect to support advanced peripherals. The first member of the family is the Arduino Portenta H7 module — a dual-core ARM Cortex-M7 and Cortex-M4 running at 480MHz and 240MHz, respectively.
Established Arduino users are positive about these developments.
Charlene Marini, vice president of strategy for IoT services at Arm, said in a statement that the solution will help Arduino developers securely and easily develop IoT devices, taking them from prototype to production quickly.
General availability of the new Arduino solution is scheduled for as early as next month.
Cooper Lake will deliver a 60% increase in AI inferencing and training performance
The fourth story making it into 2020’s first edition is another about AI. Intel has used the CES 2020 platform to update the tech community on its AI and ML hardware acceleration efforts.
The finer details were a little scarce. However, platforms group executive vice president Navin Shenoy did preview the performance improvement that will arrive with the chip maker’s third-generation Xeon Scalable processor family, code-named Cooper Lake.
He revealed that the 14 nanometer Cooper Lake will be available in the first half of this year. It will deliver up to a 60% increase in both AI inferencing and training performance.
This builds on marked recent progress: Intel achieved a 30-fold improvement in deep learning inferencing performance between 2017 and 2019, in part thanks to DL Boost, which encompasses a range of x86 technologies designed to accelerate AI vision, speech, language, generative, and recommendation workloads.
Cooper Lake moves things on again. It features up to 56 processor cores per socket, twice the core count of Intel’s second-gen Xeon Scalable chips, along with higher memory bandwidth, higher AI inference and training performance at a lower power envelope, and platform compatibility with the upcoming 10-nanometer Ice Lake processor.
The Intel numbers point to a future built on AI. The Santa Clara company’s AI chip segments bagged $3.5 billion in revenue last year, up from $1 billion in 2017, and Intel expects the market opportunity to grow to $10 billion by 2022. It also anticipates that the overall AI silicon market will be worth more than $25 billion by 2024.
Intel’s acquisitions portfolio also screams AI. At the tail end of last year, Intel announced it had paid an estimated $2 billion for Habana Labs, an Israeli developer of programmable AI and machine learning accelerators for cloud data centres.
Prior to that, in September 2016, Intel purchased San Mateo-based Movidius, which designs specialised low-power processor chips for computer vision. Back in 2015, Intel also bought field-programmable gate array (FPGA) manufacturer Altera, and a year later it acquired Nervana, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets.
In 2018, Intel snapped up Vertex.AI, a startup developing a platform-agnostic AI model suite.
At the end of last year, we published a blog post hinting at which new and emerging tech trends may continue to disrupt through 2020; AI and ML were leading that list. It is not surprising, then, that one of the biggest names in computing is throwing its significant hat into the ring.
Amazon trains multilingual AI to improve the shopping experience
A regular in Our Week in Digital, Amazon has also claimed its spot in our first edition of 2020. Amazon operates in 14 countries worldwide, 9 of which are eligible for its yearly Prime subscription service. Naturally, the world’s largest online retailer wants to make its shopping experience available in a multitude of languages, especially within regions where customers speaking different dialects are searching for the same products.
Chasing an efficient means of working across multiple languages, researchers from Amazon have developed a multitask shopping model, one in which the functions overlap across tasks and tend to reinforce each other.
They believe that their AI, trained on data from several different languages at once, delivers better results in any of those languages.
This improvement is down to the fact that a corpus in one language can fill gaps in that of another. As one Amazon applied scientist explains, phrases easily confused in French might not look much like their equivalents in German, so multilingual training could help sharpen the distinctions among several product queries.
How it’s trained
To train the system, the team began by picking one of its input languages at random and “teaching” the model to classify query-product pairs in just that language. Next, they trained it end to end over a series of epochs (each a complete presentation of the data set) on annotated sample queries in each of its input languages.
Lastly, an alignment phase ensured that the outputs tailored to different languages shared a representational space, achieved by minimising the distance between encodings of product titles and queries.
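As a rough illustration of that alignment idea (a toy sketch with invented names, sizes, and data; not Amazon’s actual model), gradient steps can shrink the distance between a query’s encoding and its matching product title’s encoding in a shared space:

```python
import numpy as np

# Toy sketch of the alignment phase: a linear query encoder is nudged so
# that a query's encoding moves towards the encoding of its matching
# product title in a shared representational space. All names, sizes,
# and vectors here are invented for illustration.
rng = np.random.default_rng(42)
IN_DIM, OUT_DIM = 16, 4

query_encoder = rng.normal(size=(OUT_DIM, IN_DIM))   # e.g. French queries
title_encoder = rng.normal(size=(OUT_DIM, IN_DIM))   # shared product titles

def align_step(W_q, W_t, query, title, lr=0.05):
    # one gradient step on the squared distance between the two encodings
    diff = W_q @ query - W_t @ title
    return W_q - lr * 2.0 * np.outer(diff, query)

query = rng.normal(size=IN_DIM)
title = rng.normal(size=IN_DIM)
query /= np.linalg.norm(query)  # normalise for a stable step size
title /= np.linalg.norm(title)

before = np.linalg.norm(query_encoder @ query - title_encoder @ title)
for _ in range(300):
    query_encoder = align_step(query_encoder, title_encoder, query, title)
after = np.linalg.norm(query_encoder @ query - title_encoder @ title)

print(after < before)  # the two encodings have been pulled together
```

In the real system the encoders are neural networks rather than matrices, but the principle is the same: once the spaces are aligned, a query in any language lands near the products it should match.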
Amazon says that in experiments involving 10 different bilingual models, it achieved “strong results” in as few as 15 or 20 epochs. The experiments included five models in which each language was paired with the other four, 10 trilingual models, and one pentalingual model.
According to a common performance measure in AI testing that factors in false-positive and false-negative rates, a multilingual model trained on both French and German outperformed a monolingual French model by 11% and a monolingual German model by 5%.
Similarly, a model trained on five languages (including French and German) outperformed the French model by 24% and the German model by 19%.
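The performance measure described, one that balances false positives against false negatives, is consistent with the widely used F1 score (our reading of the description; the metric is not named in the source). A minimal computation:

```python
def f1_score(true_pos, false_pos, false_neg):
    # precision is hurt by false positives, recall by false negatives;
    # F1 is their harmonic mean, so both must be good for a high score
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correct product matches, 10 false positives, 20 missed matches
print(round(f1_score(80, 10, 20), 3))  # 0.842
```

An 11% improvement on such a measure means the multilingual model both surfaced more of the right products and returned fewer wrong ones.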
A spokesperson from Amazon confirmed the success of the system:
“The results suggest that multilingual models should deliver more consistently satisfying shopping results to our customers”.
This isn’t the end, though. In ongoing work, the retail giant is continuing to explore the power of multitask learning to improve the customer shopping journey.
We hope you enjoyed our first edition of 2020. It is great to see so many of our 2020 tech trend predictions already hitting the headlines! With this edition packed with big stories, we can’t wait to see where the year takes us.
Are there any stories that made it into our first edition that caught your eye? Leave us a comment below!