Tachyum expands Prodigy’s value with licensable TPU core for AI in IoT, edge devices

September 18, 2023

Tachyum announced that it is expanding the Prodigy value proposition by offering its Tachyum TPU (Tachyum Processing Unit) intellectual property as a licensable core. This will enable developers to utilise AI (artificial intelligence) models trained in datacentres on IoT (internet of things) and edge devices. Tachyum’s Prodigy is a universal processor that combines general purpose processing, high performance computing (HPC), artificial intelligence (AI), deep machine learning, explainable AI, bio AI and other AI disciplines on a single chip.

With the growth of the AI chipset market for edge inference, Tachyum is looking to extend its proprietary Tachyum AI data type beyond the datacentre by providing its IP (intellectual property) to outside developers. The main features of the TPU inference and generative AI/ML (machine learning) IP architecture include architectural transactional and cycle-accurate simulators; tools and compiler support; and hardware licensable IP, including RTL (register transfer level) in Verilog, a UVM (universal verification methodology) testbench and synthesis constraints. Tachyum has 4b per weight working for AI training and 2b per weight as part of the proprietary Tachyum AI (TAI) data type, which will be announced later this year.
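To make the "bits per weight" idea concrete: low-bit data types shrink a model by storing each weight as a small integer plus a shared scale factor. The sketch below shows generic symmetric 4-bit per-tensor quantization; Tachyum's TAI data type is proprietary and unannounced, so this is an illustration of the general technique, not its actual encoding.

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 4):
    """Symmetric per-tensor quantization of float weights to `bits` bits.

    Generic illustration only -- not Tachyum's TAI format, whose
    encoding has not been published.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = float(np.max(np.abs(w))) / qmax  # map largest |weight| to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights for inference
    return q.astype(np.float32) * scale

# Quantize a small weight tensor and measure the rounding error
w = np.array([0.91, -0.42, 0.07, -0.88], dtype=np.float32)
q, scale = quantize_weights(w, bits=4)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))
```

At 4 bits per weight the storage cost is one eighth of float32, at the price of a rounding error bounded by half the scale step, which is why training (here, at 4b) typically needs more precision than edge inference (2b).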

“Inference and generative AI is coming to almost every consumer product and we believe that licensing TPU is a key avenue for Tachyum to proliferate our world-leading AI into this marketplace for models trained on Tachyum’s Prodigy universal processor chip. As Tachyum is the only owner of the TPU trademark within the AI space, it is a valuable corporate asset not only to Tachyum but to all the vendors who respect that trademark and ensure that they properly license its use as part of their products,” says Radoslav Danilak, founder and CEO of Tachyum.

As a universal processor offering utility for all workloads, Prodigy-powered data centre servers can switch between computational domains (such as AI/ML, HPC and cloud) on a single architecture. By eliminating the need for expensive dedicated AI hardware and increasing server utilisation, Prodigy reduces CAPEX (capital expenditure) and OPEX (operational expenditure) while delivering data centre performance, power and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5 times the performance of the highest performing x86 processors for cloud workloads, up to 3 times that of the highest performing GPUs (graphics processing units) for HPC, and 6 times for AI applications.
