Cutting-Edge Monster API Platform Transforms AI Development: Up to 90% Cost Reduction Achieved Through Decentralized Computing



Today Monster API is launching its platform to give developers access to GPU infrastructure and pre-trained AI models at a far lower cost than other cloud-based options, with an emphasis on ease of use and scalability. The platform uses decentralized computing to let developers build AI applications quickly and efficiently, saving up to 90% over traditional cloud options.

The company secured $1.1 million in pre-seed funding led by Carya Ventures, with participation from Rebright Partners.

Generative AI is disrupting every industry, from content creation to code generation. However, the application development process in the machine learning world remains expensive, complex, and hard to scale for all but the largest and most sophisticated ML teams.

The new platform gives developers ‘out-of-the-box’ access to the latest and most powerful AI models (such as Stable Diffusion) at one-tenth the cost of traditional cloud ‘monopolies’ like AWS, GCP, and Azure. In one example, an optimized version of the Whisper AI model on Monster API cost 90% less than running it on a traditional cloud like AWS.

Using Monster API’s full stack (an optimization layer, a compute orchestrator, a massive GPU infrastructure, and ready-to-use inference APIs), a developer can create AI-powered applications in a matter of minutes. Developers can also fine-tune large language models with custom datasets.


“By 2030, AI will impact the lives of 8 billion people. With Monster API, our ultimate wish is to see developers unleash their genius and dazzle the universe by helping them bring their innovations to life in a matter of hours,” said Saurabh Vij, CEO and co-founder, Monster API.

“We eliminate the need to worry about GPU infrastructure, containerization, setting up a Kubernetes cluster, and managing scalable API deployments, while also offering lower costs. One early customer has saved over $300,000 by shifting their ML workloads from AWS to Monster API’s distributed GPU infrastructure,” Vij continued. “This is the breakthrough developers have been waiting for: a platform that’s not just highly affordable but also intuitive to use.”

“Just as the first browser opened up a portal and allowed masses to interact with the internet, we believe this innovation will bring a tsunami of AI-powered applications to the world,” added Vij.

How it Started

What if you could run the latest AI models on someone’s crypto mining rig or an Xbox?

Monster API is the result of the personal experience of the two brothers who founded the company, Saurabh Vij and Gaurav Vij. Gaurav faced a significant challenge in his computer vision startup when his AWS bills skyrocketed, putting immense financial strain on his bootstrapped venture. In parallel, Saurabh, formerly a particle physicist at CERN (the European Organization for Nuclear Research), recognized the potential of distributed computing through projects like LHC@home and Folding@home.

Inspired by these experiences, the brothers sought to harness the computing power of consumer devices like PlayStations®, gaming PCs, and crypto mining rigs for training ML models. After multiple iterations, they successfully optimized GPUs for ML workloads, leading to a 90% reduction in Gaurav’s monthly bills.

“We were jumping with excitement and felt we could help millions of developers just like us building in AI,” said Gaurav Vij, Monster API co-founder.


Monster API Offerings

Monster API allows developers to access, fine-tune and train ML models. It enables:

  • Access to the latest AI models: Integrates the latest and most powerful AI models, such as Stable Diffusion, Whisper AI, and StableLM, with scalable, affordable, and highly available REST APIs through a one-line command interface, without complex setup.
  • Fine-Tuning: Lets developers enhance LLMs with Monster API’s no-code fine-tuning solution, which reduces development to specifying hyperparameters and datasets. Developers can fine-tune open-source models such as Llama and StableLM to improve response quality on tasks like instruction-answering and text classification, approaching ChatGPT-like response quality.
  • Training: Monster API’s decentralized computing approach provides on-demand access to tens of thousands of powerful GPUs, such as A100s, to train breakthrough models at substantially reduced costs. This contrasts with traditional model training, which is expensive, hard to access, and thus limited to a few well-funded businesses. The containerized instances come pre-configured with CUDA, AI frameworks, and libraries for a seamless managed experience.
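As a sketch of what calling such a model-serving REST API might look like from a developer's side, the snippet below assembles an authenticated inference request. The base URL, route, and parameter names here are hypothetical illustrations, not Monster API's documented interface; consult the official docs for the real endpoints.

```python
import json

# Hypothetical endpoint -- illustrative only, not Monster API's actual route.
API_BASE = "https://api.monsterapi.ai/v1"

def build_inference_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for a generation call.

    Returns a plain dict describing the HTTP request, so it can be sent
    with any HTTP client (requests, httpx, urllib, ...).
    """
    return {
        "url": f"{API_BASE}/generate/{model}",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # token auth is assumed
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "samples": 1}),
    }

req = build_inference_request("stable-diffusion", "a watercolor fox", "MY_KEY")
# The request could then be dispatched with, e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```

The appeal of this style of interface is that the developer only chooses a model name and payload; provisioning, containerization, and scaling happen behind the endpoint.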

Monster API Benefits

  • Predictable API billing: Unlike the standard “pay by GPU time” model, Monster API bills per API call, making expenses simpler and more predictable to manage.
  • Auto-scalable, reliable APIs: APIs automatically scale from one to 100 GPUs to handle increased demand, ensuring reliable service.
  • Affordable global scalability: The decentralized GPU network enables geographic scaling.

“Generative AI is one of the most powerful innovations of our time, with far-reaching impacts,” said Sai Supriya, managing partner at Carya Venture Partners. “It’s very important that smaller companies, academic researchers, and competitive software development teams also have the ability to harness it for societal good. Monster API is providing access to the ecosystem they need to thrive in this new world.”

TalkDev Bureau
The TalkDev Bureau has five well-trained writers and journalists, well versed in the B2B enterprise technology industry and constantly in touch with industry leaders for the latest trends, opinions, and other inputs, to bring you the best and latest in the domain.
