Choosing a Large Language Model (LLM) for an AI project is not a one-size-fits-all situation. With all the options available today, the decision-making process can be daunting. This implementation-neutral guide provides insights into the advantages and potential pitfalls to weigh when making that decision.
Whether you're looking to quickly implement a solution for a specific application or seeking to build an LLM-based solution that aligns with your organization's needs, the questions below will steer you in the right direction. By the end, you'll be better equipped to evaluate the needs of your project and decide which LLM to use.
By far the biggest distinction between LLM offerings is where and how they are hosted.
LLMs can be obtained through a provider, i.e., an online service. Providers may bundle additional capabilities and services around the LLM itself, and they operate on a software-as-a-service (SaaS) business model, not unlike more conventional cloud and data services. Provider offerings can be a fast path to gaining access to LLMs, enabling developers to start experimenting and integrating AI capabilities into their projects without the effort of setting up and managing infrastructure.
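To make this concrete, here is a minimal sketch of what consuming a provider-hosted LLM typically looks like: a single authenticated HTTP call to the provider's API. The endpoint, model name, and payload shape below follow the OpenAI-style chat completions convention and are illustrative rather than specific to any one provider.

```python
import os
import requests

# Illustrative only: endpoint, model name, and payload shape follow the
# OpenAI-style chat completions convention; substitute your provider's API.
API_KEY = os.environ["LLM_API_KEY"]
ENDPOINT = "https://api.openai.com/v1/chat/completions"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # whichever model the provider exposes
        "messages": [{"role": "user", "content": "Summarize our Q3 report in one sentence."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The appeal is clear: there is no infrastructure to manage and you pay per request, but every prompt and response transits the provider's service.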
Self-hosting a Large Language Model involves running the model on infrastructure within your own environment. Because data stays within that environment, security and data governance can be less of a concern than with a provider. The models themselves can be obtained online, but must be hosted and managed on suitable infrastructure, either on-premises or in the cloud.
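By contrast, a self-hosted setup loads model weights onto hardware you control. A minimal sketch using the Hugging Face transformers library is shown below; the model name is only a placeholder for whichever open-weight model you choose, and a production deployment would typically sit behind a dedicated inference server rather than an in-process pipeline.

```python
# A minimal self-hosting sketch: the model runs entirely on your own hardware,
# so prompts and outputs never leave your environment.
# Requires: pip install transformers torch
from transformers import pipeline

# Placeholder model; swap in whichever open-weight model fits your hardware.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

result = generator(
    "Summarize our Q3 report in one sentence.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```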
The following only applies to provider-based LLM solutions.
The following only applies to self-hosted solutions.
The following applies to both provider-based and self-hosted LLM solutions.
The cost of LLMs may be difficult to quantify, but it is a worthwhile concern for both short- and long-term use of the product. A useful tool for discovering the cost of some platform implementations is LLM Price.
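As a rough illustration of how usage-based pricing adds up, the back-of-envelope calculation below multiplies expected token volume by a per-token rate. The request counts and prices are placeholder figures, not quotes from any provider, and real pricing usually differs between input and output tokens.

```python
# Back-of-envelope monthly cost estimate for a provider-hosted LLM.
# All figures below are placeholders; check current provider price sheets.
requests_per_day = 10_000
avg_input_tokens = 500
avg_output_tokens = 200

price_per_1k_input = 0.0005   # USD per 1K input tokens, placeholder
price_per_1k_output = 0.0015  # USD per 1K output tokens, placeholder

daily_cost = requests_per_day * (
    avg_input_tokens / 1000 * price_per_1k_input
    + avg_output_tokens / 1000 * price_per_1k_output
)
print(f"Estimated monthly cost: ${daily_cost * 30:,.2f}")
```

A comparable exercise for self-hosting would tally GPU instance hours, storage, and the engineering time needed to operate the model.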
LLM case studies are a great way to gather more information to help drive decision making. They typically provide real-world examples of companies that have successfully implemented both provider-based and self-hosted solutions.
For example, an analysis of four such studies was featured in Carnegie Mellon's Software Engineering Institute blog back in 2023. It contains some insights about ChatGPT 3.5, and about the studies themselves.
Look into detailed performance benchmarks for the models you're considering. Consider directly comparing provider-hosted LLMs with self-hosted ones. Important metrics include latency, throughput, and scalability.
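A simple starting point is a micro-benchmark that times repeated calls against each candidate and reports percentile latency. The sketch below assumes a hypothetical `query_llm` function wrapping whichever provider API or self-hosted endpoint you are evaluating; it is a first pass, not a substitute for a load test under realistic traffic.

```python
# A minimal latency micro-benchmark; `query_llm` is a hypothetical wrapper
# around whichever provider API or self-hosted endpoint you are evaluating.
import statistics
import time

def benchmark(query_llm, prompt: str, runs: int = 20) -> None:
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        query_llm(prompt)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    print(f"p50: {statistics.median(latencies):.2f}s")
    print(f"p95: {latencies[int(0.95 * (len(latencies) - 1))]:.2f}s")
    print(f"throughput: {runs / sum(latencies):.2f} req/s (sequential)")
```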
One such approach was taken by LMSys Org, which used an Elo rating system. We wrote about this back in May of this year: Has Anthropic Surpassed OpenAI?
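For context, an Elo-style leaderboard is built from pairwise comparisons: each time human raters prefer one model's answer over another's, the winner's rating rises and the loser's falls. Below is a minimal sketch of the standard Elo update; treat it as the underlying idea rather than LMSys's exact implementation, which has continued to evolve.

```python
# Standard Elo update applied to a single pairwise model comparison.
def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32) -> tuple[float, float]:
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Example: two models start at 1000; model A wins one head-to-head vote.
print(elo_update(1000, 1000, a_won=True))  # -> (1016.0, 984.0)
```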
Consider how LLMs integrate with existing systems and workflows, for both provider-based and self-hosted scenarios. Look for documented compatibility issues and solutions for integrating LLMs with popular platforms and tools.
Llama Index has one such compatibility matrix for a few popular paid LLM APIs and some popular tools: LLM compatibility tracking
LLMs have access to, or are themselves, data that needs to be secured. Consider comparisons of known risks for the models in scope, and the different security postures of provider-based and self-hosted offerings.
OWASP has distilled a lot of general guidance around LLM security into a useful document: OWASP Top 10 for LLM Applications
Depending on your business, you may be bound by specific legal and industry regulations. Understanding how an LLM-based solution supports or complicates that compliance is crucial to success. This is most pronounced for self-hosted solutions, but it can apply to provider-based LLMs as well.
LLMs are a big part of the fast-moving AI space, with new ideas and technologies emerging around them all the time. Projecting where your solution will be in the years to come is challenging, but worth doing. That projection may in turn affect the decision to self-host or to use a provider-based solution.
Innovations around LLMs can also alter how a given product performs. Concepts like federated learning and edge computing can dramatically change how an LLM performs, what it costs, and how well it is positioned for the future.
Across the phases of design, build, compliance, cybersecurity, and scaling to production operation, there are many considerations your team may be facing for the first time.
As a team of professionals, we help companies deliver successful solutions. If you're setting up a project or would like more guidance like this, feel free to contact us today.