Neuromation Bi-annual Report

As mentioned in our last update, these voluntary reports are now on a bi-annual cadence, so this report covers the first six months of 2020 as well as more recent information that became available prior to publication.

Going forward, we would like to use the opportunity provided by these reports to update our token holders primarily on product development, research, client work and expectations for the future. We would also like to briefly review our available services for business clients and token holders as offered by our “Neuromation Labs” business practice units.

The current update will focus primarily on our core product, as since our last report our entire team has remained laser-focused on the platform's continued development, improvement and expansion. The platform is our fully-fledged MLOps solution that combines flexible resource orchestration for ML workloads, both in the cloud and on-prem, with automated instance control and a natively integrated pipeline creation and management engine for the entire ML lifecycle (including data collection and preparation, experiment tracking, hyperparameter tuning, remote debugging, distributed training, and model deployment and monitoring). It allows data science and AI/ML teams to optimize their infrastructure costs, streamline infrastructure management, and integrate freely with their choice of leading open source and proprietary tools, accelerating development and deployment across the entire ML lifecycle. Integrated collaboration tools, access control, and full support from our remote MLOps team round out the offering.

Product Update

Below, we would like to walk you through some of the newest feature additions that are continually improving our core offering.

First, we are happy to report that the updated Web UI we teased in our last product update is now fully live. This is a major update and improvement over its previous iteration.

New functionality added to the web UI includes the ability for jobs to be shared with teammates, as well as dramatically improved responsiveness and a more intuitive UX.

Role management on the platform has also been improved and now allows managers to create and name project-related roles, grant permissions, and assign them to team members or withdraw them as needed. This allows for significantly easier management of all project-related entities.

We have also greatly improved the platform's monitoring tools and reporting functionality, now allowing resource consumption to be analyzed at the job, user and cluster levels. As part of this effort we expanded the ‘neuro show config’ CLI command for checking resource availability, allowing users to easily see how many jobs of each type can be run in the current cluster.

Our custom workflow and pipelining engine, called neuro-flow, now allows almost any existing ML project to be run on the platform. This feature allows ML engineers to use their own code, grab useful code from GitHub, or use the latest research as a baseline. For example, users can begin with a GitHub repository and then create volumes, upload the project to platform storage, build custom Docker images, and run training right on the platform with minimal configuration through handy short commands.
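To make the flow above concrete, here is a rough sketch of what a minimal neuro-flow workflow file might look like, covering a data volume, a custom image build, and a training job. All names (`my-project`, the volume and image identifiers, `train.py`) are hypothetical, and the exact field names and expression syntax may differ between neuro-flow versions, so treat this as an illustration of the shape rather than an authoritative reference:

```yaml
# Hypothetical .neuro/live.yml sketch -- names and schema details are
# assumptions; consult the neuro-flow documentation for the exact syntax.
kind: live
title: my-project            # hypothetical project name

volumes:
  data:
    remote: storage:my-project/data   # platform storage location
    mount: /project/data              # mount point inside the job

images:
  train:
    ref: image:my-project:v1          # custom Docker image built on the platform
    dockerfile: $[[ flow.workspace ]]/Dockerfile

jobs:
  train:
    image: $[[ images.train.ref ]]
    volumes:
      - $[[ volumes.data.ref_rw ]]    # mount the data volume read-write
    cmd: python train.py
```

With a file like this in place, the short commands mentioned above would handle uploading the project, building the image, and launching the training job against the declared resources.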

Since our last update, we have also made significant progress in our efforts to make the platform fully bilingual, with a focus on Russian language localization. The Web UI is now fully localized for Russian and all documentation has been translated as well. Our website is now fully bilingual, too.

Going forward, we have a detailed product roadmap for the next several years. As of this writing, we are actively working on adding support for NVIDIA’s vGPU functionality, a technology that allows users to virtualize GPU resources in the cloud, so stay tuned for updates on that in our next report!

Company Update

For our current and prospective customers and token holders looking for solutions to specific AI-driven problems, Neuromation offers three custom development and consulting units, known as Neuromation Labs. These consist of our MLOps Lab, our Synthetic Data Lab, and a third lab focusing on livestock and animal husbandry called the “Animal AI Lab”.

In our MLOps Lab, we leverage our platform and trained MLOps specialists to provide complete custom AI pipelines with ongoing support. For specific AI-driven problems, we find applicable reference ML models and datasets and deploy them as easily modifiable recipes on your own instance of the platform, so you can quickly launch into development.

Our Synthetic Data Lab is currently working with major mobile manufacturers on facial recognition, as well as with the biggest robotics company in the world on indoor simulation environments. In February of this year, we also contributed to a paper together with Google Brain on learning to see transparent objects using synthetic data. Current capabilities include working with customers to develop synthetic datasets, improve facial recognition performance, and simulate environments.

Our Animal AI Lab uses artificial intelligence and machine vision to facilitate monitoring and management of feedlots to optimize production, increase quality, reduce environmental impacts and improve animal well-being. Although work in this area has been underway internally for some time, the lab was recently awarded key patents (1,2) for implementation of AI technologies, which now provides us with the protection and sectoral positioning necessary to go out to the broader market in this area for the first time.

Neuromation is also pleased to announce several high-level partnerships concluded during the period. We have signed a partnership agreement with one of the largest telecom operators in Eastern Europe and will be providing them with a white-labeled MLOps smart cloud solution for their subscriber base.

We also have an ongoing partnership with NVIDIA as part of their Inception Program and have concluded a partnership with global software engineering firm DataArt. We were also named a board member of the largest (although still quite new) MLOps industry group. This group provides a forum for data scientists, ML engineers and DevOps professionals to discuss their experiences and collaborate on best practices in MLOps. Our presence in this group puts us in daily communication with leading machine learning operations engineers who are solving the unique challenges of building production AI/ML pipelines.

In response to the COVID-19 pandemic, we have partnered with both AWS and Insilico Medicine to help find more effective medical treatments and to develop better policies for containment and prevention of the virus. Currently, teams using the platform for this purpose are focused on: the generation of novel molecules and antifibrotic agents, the analysis of biomarkers and microbiomes to assist in predictive patient analytics, the use of geroprotectors to boost immune systems, the development of computer vision for simple monitoring of social distancing and protective policies, and the creation of epidemiological models that more accurately predict contagion. We are providing the platform and computing power completely free of charge to data scientists, engineers and researchers working on ML-driven solutions to COVID-19. For more information, please see our COVID-19 Solution page here.

Finally, we would like to reiterate that our AI services marketplace for token holders remains active. Neurotoken (NTK) can still be exchanged for AI services including data labeling, model development, synthetic data creation, AI strategy consulting and fundamental research. Engagements with any of our Neuromation Labs can also be transacted through the Marketplace in exchange for NTK. Please contact us for access and pricing assistance.