Advai

IT Services and IT Consulting

We don't make AI, we break it.

About us

We don’t make AI, we break it. Advai is the UK leader in third-party assurance, benchmarking, and monitoring of AI systems. We enable AI adoption by helping organisations manage the risk, compliance, and trustworthiness of their AI systems. Our focus is to identify and mitigate the risks of AI adoption by discovering points of failure in both internally developed and third-party procured technologies. Our tooling mitigates AI-specific security vulnerabilities and ensures the reliability, robustness, and trustworthiness of your AI and generative AI systems. Advai’s monitoring platform aligns with your governance frameworks, mapping technical metrics to your risk and compliance needs. We have driven initiatives for the UK Government’s AI Safety Institute, the Ministry of Defence, and leading private sector companies.

Website
http://www.advai.com
Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
London
Type
Privately Held
Founded
2020
Specialties
Artificial Intelligence, Adversarial AI, Adversarial ML, Robust AI, AI Assurance, AI Compliance, and AI Risk Management

Updates

  • Advai

    A few days ago, US lawmakers reminded OpenAI of its pledge to dedicate 20% of its computing resources to research on #AISafety. Roughly one year ago, the #SuperAlignment team was announced to great excitement. Well, we were excited! "We need scientific and technical breakthroughs to steer and control AI systems much smarter than us," OpenAI wrote at the time. "To solve this problem within four years, we're starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we've secured to date to this effort." It will be interesting to see how OpenAI responds, given Vox's recent headline indicating the initiative isn't doing too well... "I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded.

  • Advai

    This AI cyber attack concept demonstrates the principle of 'ever-changing' in the new age of #generativeAI. It's a virus that can use LLMs to change aspects of its code enough to evade detection yet retain core functionality. 'Ever-changing' is a key concept in #AI and #AISafety because AI behaviour can be non-deterministic. Surprising new threats and vulnerabilities emerge weekly. Like this cyber attack, AI weaknesses are a moving target! Sometimes these changes come from a retraining run of the foundation model you use. Perhaps an innocuous guardrail becomes pernicious. Maybe unintended consequences involve the loss of simple core functionalities. Or the law changes. The standards change. The risk frameworks adapt. With AI you can never 'set and forget'. AI is ever-changing, so your AI Assurance practices must be, too. (A minimal monitoring sketch appears at the end of these updates.)

    AI-powered 'synthetic cancer' worm represents a new frontier in cyber threats | DailyAI

    https://dailyai.com

  • Advai

    We really couldn’t have explained this better ourselves, although we might argue it’s just ‘best-practice AI’ as opposed to ‘defence-grade’. The same certainly holds for high-impact sectors like #financialservices. #AISafety #DefenceGrade

    Prof. Nick Colosimo MSc PgC BSc(Hons) CEng FIET FIKE FBCS FRAeS

    Just an Engineer, Scientist and Tech Guy

    I find myself explaining (a lot) how AI in defence is not simply the popular AI you can just procure and use. To help, I've just created this diagram, which should be self-explanatory. The harder the task and the higher the impact of failure, the more likely it is you need 'Defence AI': not regular AI or any other kind of AI, but 'Defence AI'. Defence 'grade' AI, if you will.

    What is 'Defence AI' then?

    1. AI which has the following properties: appropriate performance (e.g., high accuracy); appropriate explainability; robustness (i.e., an invariance to a variety of real-world factors); security against adversarial activity; manageability (i.e., at-edge capable update and upgrade in many cases); low-shot learning capability (in some cases); and human-centricity.
    2. AI that has been architected in line with good systems engineering, safety engineering, and human factors engineering principles from the start. This often means the solutions are formally policeable (if not formally self-policed), are hybrid solutions (i.e., involve more than one branch of AI, like neuro-symbolic AI), and can be appropriately and readily bounded to suit the operational circumstances.
    3. AI that is designed to be relevant to its operational context.
    4. AI that has been designed with the wider system integration in mind.
    5. AI that has been tested thoroughly through its lifecycle and in the deployed operational environment.

    This is why it's often much harder than many think to simply take the latest shiny AI thing and drop it straight into a defence context. I mean, you can, but if the consequences or impact of failure is above a particular threshold (which can vary by operational context), then it may not be such a good idea.

    The above is not to say that bespoke AI needs to be created for every challenging and impactful task, but rather that heavy adaptation is likely to be necessary for the orange squares of the 9-box model.

    A caveat in all of this is that operating procedures, operator training, and human involvement can of course mitigate many issues, though not all; all you are really doing is moving around the trade-space and shifting costs, and potentially making your cost-capability point worse, depending upon the details.

    Agree or disagree? What's missing?

    GOFAI = Good Old Fashioned AI (e.g., logic and automated reasoning including PL & FOL, statistical methods including SLM, etc.).

    Steven Meers Peter Stockel John Loader MBE Michael Hull Keith Dear Al Brown Heather M. Roff, PhD Zachary Kallenborn

  • Advai

    When assuring AI systems, we look at a number of things: the model, the people, the supply chain, the data, and so on. In this article, we zoom in on a small aspect of this you might not have come across: #SyntheticData. We explain how 'fake' data can improve model accuracy, enhance robustness to real-world conditions, and strengthen adversarial resilience, and why it might be critical for the next step forward in #ArtificialIntelligence. Let us know your thoughts and questions below! (A minimal code sketch of the idea appears at the end of these updates.)

    Authentic is Overrated: Why AI Benefits from Synthetic Data.

    Advai on LinkedIn

  • Advai

    Seeking to adopt AI, many organisations begin with "Well, what's possible?!", or they ask "What would bring the greatest value to our business?" Assurance is left as an afterthought. That's why the vast majority of proof-of-concept work gets thrown out: #Risk and #Compliance managers can't be appeased, so it isn't signed off. Once you've compiled your list of possible use-cases, prioritise them against what *can* be assured. It's a young field. Not everything can be. (In fact, most can't be!) Save yourself the hassle and wasted resources: it is better to focus on lower-value use-cases that have robust assurance methods available than on high-value but ultimately risky ones. Get in touch if you'd like the assurance perspective. (A simple illustration of this prioritisation appears at the end of these updates.) #AIadoption #AIAssurance

  • Advai

    Join our very own Chris Jefferson and other thought leaders in #redteaming and #ArtificialIntelligence as they explore the critical role of red teaming in responsible and secure AI.

    What: techUK hosting a webinar, "Red Teaming Techniques for Responsible AI: Best Practices and Resources from Leading Experts"
    When: 10am – 11am, 16 July 2024
    Who: Tess Buckley, Programme Manager, Digital Ethics and AI Safety, techUK; Tessa Darbyshire, Responsible AI Manager, Accenture; steve eyre, Cyber Technical Director, Alchemmy; Nicolas Krassas, Red Team Member, Synack Red Team; Chris Jefferson, CTO and Co-Founder, Advai
    Where: Virtual, secure your free ticket: https://bitly.cx/rZcQ

    #ResponsibleAI #AISafety

    Red Teaming Techniques for Responsible AI: Best Practices and Resources from Leading Experts

    techuk.org

  • Advai

    "Arguably the greatest danger presented by #AI is not a Terminator-styled Skynet, a master architect of human annihilation, but instead a destructive emergent collective intelligence without consciousness, agency or intent." If you missed our article on #EmergentIntelligence (and, shiver, Army Ants), then here's your second chance! #AISafety

    Ant Inspiration in AI Safety: Our Collaboration with the University of York

    Advai on LinkedIn
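
As referenced in the 'ever-changing' update above, here is a minimal sketch of what ongoing assurance monitoring can look like in practice: re-running a fixed behavioural test suite whenever the upstream model changes and flagging regressions beyond a tolerance. The metrics, values, and placeholder evaluation function are invented for this sketch; this is not Advai's platform.

    # Minimal sketch, not Advai's platform: re-run a fixed evaluation suite whenever
    # the upstream model changes and alert when a tracked metric drifts too far.
    # The "model" here is a stand-in; in practice this would call your AI system.

    BASELINE = {"refusal_rate": 0.02, "accuracy": 0.91}  # recorded at sign-off (invented values)
    TOLERANCE = 0.05

    def evaluate(model) -> dict:
        # Placeholder: run the real behavioural test suite against the current model.
        return {"refusal_rate": 0.08, "accuracy": 0.90}

    def check_for_regressions(current: dict) -> list[str]:
        alerts = []
        for metric, baseline_value in BASELINE.items():
            drift = abs(current[metric] - baseline_value)
            if drift > TOLERANCE:
                alerts.append(f"{metric} drifted by {drift:.2f} (baseline {baseline_value})")
        return alerts

    for alert in check_for_regressions(evaluate(model=None)):
        print("ALERT:", alert)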
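As referenced in the synthetic data update above, here is a minimal sketch of the underlying idea: adding synthetic, perturbed copies of the training data and checking whether accuracy on degraded "real-world" inputs improves. The dataset, noise model, and classifier are illustrative assumptions for this sketch only, not Advai's tooling.

    # Minimal sketch, not Advai's tooling: augment training data with synthetic,
    # perturbed copies and compare accuracy on noisy "real-world" test inputs.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Simulate degraded deployment conditions with additive noise on the test set.
    X_test_noisy = X_test + rng.normal(scale=2.0, size=X_test.shape)

    def noisy_test_accuracy(X_tr, y_tr):
        model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
        return model.score(X_test_noisy, y_test)

    baseline = noisy_test_accuracy(X_train, y_train)  # trained on clean data only

    # Synthetic augmentation: append perturbed copies of the training data.
    X_synth = X_train + rng.normal(scale=2.0, size=X_train.shape)
    augmented = noisy_test_accuracy(
        np.vstack([X_train, X_synth]), np.concatenate([y_train, y_train])
    )

    print(f"clean-only training: {baseline:.3f}, with synthetic copies: {augmented:.3f}")

The same pattern extends to richer generators (simulation, generative models) and to adversarial perturbations rather than random noise.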
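As referenced in the use-case prioritisation update above, here is a hypothetical illustration of screening candidate use-cases by assurability before ranking them by value. The use-cases, scores, and threshold are invented for the example.

    # Hypothetical illustration: screen out use-cases that cannot currently be
    # assured, then rank the remainder. All names and scores are invented.
    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        business_value: int  # 1 (low) .. 5 (high)
        assurability: int    # 1 (hard to assure) .. 5 (robust assurance methods exist)

    candidates = [
        UseCase("Customer-facing chatbot", business_value=5, assurability=2),
        UseCase("Invoice field extraction", business_value=3, assurability=5),
        UseCase("Internal document search", business_value=4, assurability=4),
    ]

    ASSURABILITY_FLOOR = 3  # below this, sign-off by risk and compliance is unlikely today

    viable = [c for c in candidates if c.assurability >= ASSURABILITY_FLOOR]
    ranked = sorted(viable, key=lambda c: (c.assurability, c.business_value), reverse=True)

    for c in ranked:
        print(f"{c.name}: value={c.business_value}, assurability={c.assurability}")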
