Cradle Raises $73M to Expand Protein-Design AI & Wet Lab
2024-11-26
In recent years, using AI to accelerate biotech work has become standard practice, and companies that deploy the technology are seeing significant uptake and fresh investment. One of them is Cradle, which focuses on protein design and has just raised $73 million to expand its lab and team.

Introduction to Cradle in the Biotech Landscape

Cradle was founded in 2022 as part of a wave of companies exploring how language models can be applied to biotech. Founder and CEO Stef van Grieken has aptly described the strings of amino acids and bases as an "alien programming language" that an AI model can partially understand.

The company's approach is to accelerate the testing of large biomolecules like proteins. By identifying and recommending sequences that affect desirable qualities, Cradle aims to help biotech and pharma companies reach their goals more efficiently. For instance, if a company has a useful protein but wants to make it more heat-resistant, Cradle's model can suggest alternative sequences that improve heat tolerance without affecting its other functions.

Since a successful $24 million Series A in 2023, Cradle has been actively serving customers across the biotech and pharma spaces. Van Grieken emphasized that companies value the acceleration and the cost savings that come from running fewer experimental rounds.

Experimental Rounds and Their Costs

In biotech research, experimental rounds are costly, often running from tens to hundreds of thousands of dollars, and they take significant time. There is also an element of guesswork and luck: careful study and intuition shape the results, but a great deal of unpredictability remains in the process. Any method that reduces this uncertainty is highly welcome.

The Simplicity of Cradle's SaaS Business Model

Cradle's simple SaaS business model has also helped adoption. Customers don't have to worry about royalties, revenue sharing, or IP entanglements, which is a clear advantage and lets them focus on their core research and development.

Competition in the Biotech AI Space

Van Grieken noted that competition in the biotech AI space can be divided into two groups. One group engages in close partnerships to co-develop drugs or processes, while the other, like Cradle, strictly provides a software service. He believes that AI in drug discovery and development will eventually become a commodity, and every team should have access to it.

The Biotech Laboratory and Model Training

Although Cradle is a software company, it also runs a laboratory in Amsterdam. There, the team conducts A/B-style testing on different types of proteins and builds "Foundational Datasets" that help the models learn protein properties useful to all customers. Regularly training and fine-tuning models on these datasets is essential to the company's success.

The new $73 million round, led by IVP with participation from Index Ventures and Kindred Capital, will be used to build out the wet lab and hire more staff. Van Grieken said the goal is to put Cradle's software into the hands of a million scientists.

It all shows the significant impact AI is having on the biotech industry, with companies like Cradle at the forefront of the transformation.
Perplexity Considers Entering the Hardware Market
2024-11-26
Perplexity, the AI-powered search engine, may be moving into hardware. Aravind Srinivas, the company's founder and CEO, recently posted on X that he is considering a "simple, under $50" device that can reliably answer questions through voice. The idea has sparked plenty of interest and speculation in the tech industry.

Unlock the Power of AI with Perplexity's Hardware Venture

Perplexity's Hardware Ambitions

Perplexity's foray into hardware is a bold move with real potential. With Aravind Srinivas at the helm, the company is aiming for a user-friendly device that puts AI-powered answers within easy reach, and the "simple, under $50" framing is particularly appealing because it makes the technology accessible to a much wider audience. A device like this could change how people look up information and solve everyday problems.

The appeal lies in offering a reliable, efficient way to get answers. By leveraging Perplexity's AI, the device could provide accurate, detailed responses to a wide range of questions, whether for personal use or in a professional setting, and could become an everyday tool.

Moreover, the under-$50 price tag makes it attractive to consumers who might hesitate to invest in more expensive AI devices. It promises a cost-effective option without an obvious compromise on quality or performance, which could open up new markets and opportunities for Perplexity.

The Landscape of AI Hardware

In recent years, hardware has become a hot topic among high-profile AI startups. The allure lies not only in its cachet but also in the potential for new forms of interaction. Midjourney formed a hardware team in August, and OpenAI CEO Sam Altman is working with ex-Apple design chief Jony Ive on an AI hardware project.

These examples show a growing recognition of hardware's importance in the AI space. New form factors can enable more intuitive and immersive interactions with AI and a better user experience, and Perplexity's entry adds to the competition and innovation in the field.

Hardware development is not without its challenges, however, as other AI device ventures such as Rabbit and Humane have shown. Rabbit's R1, one of the more successful AI devices of recent years, is now available at steep discounts on eBay, suggesting issues with demand and delivery. Humane's Ai Pin, meanwhile, faced severe criticism and saw units recalled over safety issues.

Perplexity's Financial Advantage

Perplexity has a significant financial advantage: it has plenty of cash in the bank and is reportedly close to raising around half a billion dollars more. That gives the company the resources to invest in hardware development and bring a product to market.

Sufficient funding lets Perplexity focus on research and development so the hardware can meet a high standard, and it gives the company flexibility to make adjustments and improvements along the way. A large financial cushion can also help it weather setbacks during hardware development, providing a safety net that improves the odds of success.

In conclusion, Perplexity's move toward hardware is a significant development with the potential to reshape the AI landscape. With its strong financial position and innovative approach, the company is well positioned to make a mark in the hardware market, but it will need to navigate the challenges and risks of hardware development to achieve real success.
OpenAI Funds Research on Predicting Human Moral Judgements in AI
2024-11-22
OpenAI is funding academic research into predicting human moral judgements. In a filing with the IRS, its nonprofit arm disclosed a grant to Duke University researchers for a project titled "Research AI Morality." The work aims to train algorithms to predict human moral judgements in scenarios involving conflicts among morally relevant features across different fields.

Unraveling the Complexity of Predicting Moral Judgements with OpenAI's Funding

OpenAI's Grant to Duke University

In a filing with the IRS, OpenAI Inc., the company's nonprofit arm, revealed that it awarded a grant to Duke University researchers for a project titled "Research AI Morality." The initiative is part of a larger three-year, $1 million grant to study "making moral AI." Little is publicly known about the research beyond its 2025 end date, and the principal investigator, Walter Sinnott-Armstrong, a practical ethics professor at Duke, couldn't provide details when contacted. However, he and his co-investigator, Jana Borg, have conducted several studies, and written a book, about AI's potential to serve as a "moral GPS" that helps humans make better judgements. They have created a "morally-aligned" algorithm to help decide who receives kidney donations and studied the scenarios in which people would prefer that AI make moral decisions.

The Goal of the OpenAI-Funded Work

According to the press release, the goal of the OpenAI-funded work is to train algorithms to predict human moral judgements in complex scenarios involving conflicts among morally relevant features in medicine, law, and business. It is a challenging task: morality is a nuanced concept, and it isn't clear that capturing it is within reach of current technology. In 2021, the nonprofit Allen Institute for AI built a tool called Ask Delphi that was meant to give ethically sound recommendations. It judged basic moral dilemmas well enough, knowing, for example, that cheating on an exam is wrong, but slightly rephrasing a question could lead Delphi to approve of almost anything, including smothering infants.

The Limitations of Modern AI Systems

Modern AI systems are statistical machines. Trained on vast amounts of data from the web, they learn patterns in that data to make predictions, but they have no appreciation for ethical concepts and no grasp of the reasoning and emotion involved in moral decision-making. That is why AI tends to parrot the values of Western, educated, and industrialized nations: the web, and thus the training data, is dominated by articles endorsing those viewpoints. Many people's values are not expressed in AI's answers at all, especially if those people don't contribute to the training sets by posting online. And AI internalizes biases beyond a Western bent, as seen in Delphi's view that being straight is more "morally acceptable" than being gay.

The Challenge of Morality's Subjectivity

The challenge facing OpenAI and the researchers it is backing is made harder by morality's inherent subjectivity. Philosophers have debated the merits of various ethical theories for thousands of years, and no universally applicable framework is in sight. Claude tends toward Kantianism, focusing on absolute moral rules, while ChatGPT leans slightly utilitarian, prioritizing the greatest good for the greatest number. Which approach is superior depends on whom you ask. An algorithm meant to predict human moral judgements would have to take all of this into account. That is a very high bar to clear, and it remains to be seen whether such an algorithm is even possible.