📣 Hey, #Developers! Within 24 hours of its release, we have the new Llama 3.1 405B running on our SN40L RDUs. Armed with higher memory capacity on our state-of-the-art architecture, we can run the model with:
🎯 The highest precision
💻 Fewer chips
💡 Less energy
Sign up for our API Program now and enjoy early access: https://lnkd.in/g9W_Bnjv
See Llama 3.1 405B in action. ⚡ https://fast.snova.ai
#FastAI #LLM #MetaAI
About us
AI is changing the world, and at SambaNova, we believe that you don't need unlimited resources to take advantage of the most advanced, valuable AI capabilities - capabilities that help organizations explore the universe, find cures for cancer, and give companies access to insights that provide a competitive edge.

We've built an enterprise-ready AI platform from the ground up - intentionally designed for the most valuable and complex AI workloads of today and tomorrow. Using our platform as a technology backbone for the next decade of AI innovation, organizations get pre-trained Foundation Models that truly transform the way they gain value from AI and deep learning. And with our flagship offering, Dataflow-as-a-Service™, we help them realize value 22x faster.

SambaNova was founded in 2017 in Palo Alto, California, by a group of industry luminaries, business leaders, and world-class innovators who understand AI. Today, we've built an incredibly smart and motivated team dedicated to making a lasting impact on the industry and equipping our customers to thrive in the new era of AI.
- Website
- http://www.sambanova.ai
- Industry
- Computer Hardware Manufacturing
- Company size
- 201-500 employees
- Headquarters
- Palo Alto, CA
- Type
- Privately Held
- Founded
- 2017
- Specialties
- High Performance Computing, Artificial Intelligence, Machine Learning, GPT3, Foundation Models, Deep Learning, Computer Vision, True Resolution, 3D Image Analysis, Recommendation, AI Platform, Large Language Models, AI for Science, and Generative AI
Locations
- Primary
- Bayshore Fwy, Palo Alto, CA 94303, US
Updates
-
By now, you've seen that Meta's Llama 3.1 is out. Tom's Guide gives you a variety of options to try it out for free: https://lnkd.in/gAyZyR2X Want to try it without signing up? Go to fast.snova.ai ⚡️ #Meta #GenerativeAI #LLM
-
In his piece for InfoWorld, Anirban Ghoshal cites SambaNova's Anton McGonnell on the changes he anticipates within the industry as more people utilize Meta's Llama 3.1: "'We expect to see developers use techniques like speculative decoding, where less complex models handle the bulk of processing, and then call upon the larger model to verify work and correct errors when needed,' McGonnell said, adding that this could be an efficient way to run AI models as it opens new avenues for optimizing computing resources and speeds up responses in real-time applications." Read the article here: https://lnkd.in/ge3qVbY2 #LLM #FastAI #GenAI
Why Meta’s Llama 3.1 is a boon for enterprises and a bane for other LLM vendors
infoworld.com
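The speculative decoding idea McGonnell describes can be sketched in a few lines: a cheap draft model proposes several tokens at once, and the expensive target model verifies them, keeping the longest agreeing prefix. The two model functions below are hypothetical stand-ins for illustration only, not real LLMs, and the token arithmetic is invented for the sketch.

```python
def draft_next(context):
    # Hypothetical cheap "draft" model: a simple deterministic next-token rule.
    return (sum(context) + 1) % 5

def target_next(context):
    # Hypothetical expensive "target" model: the ground truth we defer to.
    # It mostly agrees with the draft, but diverges at every 4th position.
    return (sum(context) + 1) % 5 if len(context) % 4 else 0

def speculative_step(context, k=4):
    """Draft k tokens cheaply, then verify them with the target model.

    Returns the tokens accepted this step. Here the target is consulted
    once per drafted position; in a real system those verifications run
    in a single batched forward pass, which is where the speedup comes from.
    """
    drafted = []
    ctx = list(context)
    for _ in range(k):
        tok = draft_next(ctx)
        drafted.append(tok)
        ctx.append(tok)

    accepted = []
    ctx = list(context)
    for tok in drafted:
        expected = target_next(ctx)
        if tok == expected:
            # Draft agreed with the target: keep the token "for free".
            accepted.append(tok)
            ctx.append(tok)
        else:
            # Disagreement: take the target's token and stop accepting.
            accepted.append(expected)
            ctx.append(expected)
            break
    return accepted
```

The key property of the scheme is that the accepted sequence is identical to what the target model alone would have decoded; the draft model only changes how much of that work is cheap.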
-
Things you need to know to keep pace with the trajectory of innovation in AI:
🖥️ RAG, foundation models, LLMs, and more
🚀 Successfully evaluating and deploying these technologies
🏁 Staying ahead of the competition to maximize the potential of gen AI for your enterprise
Get the Gartner Hype Cycle for Generative AI report: https://lnkd.in/gHiNpfYk #AI #GenAI #AIChips
-
Want to get a head start on your next project? With SambaNova Fast API, you can bring your own custom checkpoint with free token-based credits to make your life easier — all while experiencing lightning-fast inferencing speed. ⚡️⚡️⚡️ Learn more: https://lnkd.in/g9W_Bnjv #FastAI #LLM #API
-
SambaNova Systems reposted this
Thrilled to read Jeremy Kahn's piece on Fortune highlighting Anton McGonnell's thoughts on the newly released Llama 3 405B!
AI Editor at Fortune Magazine. Author of the forthcoming book Mastering AI: A Survival Guide to Our Superpowered Future (Simon & Schuster, July 2024; Bedford Square, August 2024).
My take on Meta’s new Llama model in today’s Fortune Eye on AI newsletter. Also a look at research into a hybrid neural network and symbolic AI approach, and the results of Sam Altman’s OpenResearch foundation’s universal cash transfer experiment.
Is Meta's new Llama AI model a game changer?
fortune.com
-
In an article for The Stack, Jasper Hamill offered some details on the massive GPU farm used to train Meta’s Llama 3.1 405B, released just yesterday. His piece also includes insights from Anton McGonnell, SambaNova’s Head of Software Products, on how this release tackles the challenge that developers face when building specialized AI models: “[The latest Llama] offers a new foundation for developers to create rich, unrestricted datasets … This means developers can freely use distilled outputs from Llama 3 405B to train niche models, dramatically accelerating innovation and deployment cycles in specialized fields. Expect a surge in the development of high-performance, fine-tuned models that are both robust and compliant with open-source ethics.” Read the full article on The Stack: https://lnkd.in/gFKQXd7S #AI #GenerativeAI #LLM
-
Fascinating read from Jasper Hamill of The Stack on Llama 3.1 405B and the gigantic GPU farm used to train it! He also includes insights from SambaNova's Anton McGonnell on why this release dramatically accelerates innovation and deployment cycles in specialized fields. What are your thoughts? 💭
Meta has earmarked an increased budget of up to $40 billion of capital expenditure as it invests heavily in AI. The sheer size of the GPU cluster used to train the latest version of its #opensource #LLM Llama gives a sense of where some of that money might end up being spent. Read about Meta's open source vision and find out why Mark Zuckerberg thinks the industry is at an "inflection point":
Meta Llama 3.1 405B LLM trained on mammoth GPU farm
thestack.technology
-
Today Jeremy Kahn wrote in Fortune about Meta's new Llama model and how it could be a game changer, though there are still a lot of unknowns. Jeremy references our very own Anton McGonnell in the article. Here are some of his insights:

1️⃣ Llama 3.1 405B might be a game changer because companies can use the 405B-parameter model to create synthetic datasets that can be used to train or fine-tune small open models and hone them for specific applications. This "distillation" process has been possible before, but there were often ethical concerns about how the data used for distillation had been sourced (with data being scraped from the web without consent, or derived from the work of poorly paid human contractors).

2️⃣ Anton also applauded Meta's decision to release Llama 3.1 405B as part of a family of Llama models of different sizes (there are also upgraded 70 billion- and 8 billion-parameter models) and to release a "Llama stack," a set of related software built on top of and around the AI models themselves. Meta's AI stack includes guardrails software, to prevent the AI models from generating harmful or dangerous content, and security software to try to prevent prompt injection attacks against the Llama models.

The family of models and the AI stack, McGonnell said, create the possibility of chaining open models together in a way that would be especially cost-effective: parts of a user's query or an application are handled by small, fine-tuned models, and only those more difficult aspects that these models can't handle are handed off to the full-scale 405 billion-parameter model.

If you're subscribed to Fortune, you can read the full article here: https://lnkd.in/gQU-5u7S
You can also access the article on Yahoo Finance: https://lnkd.in/g_qsHrfX
#AI #GenerativeAI #LLM
-