The maritime industry is one of the world’s oldest, and the challenges of today require many changes to centuries-old practices. The adoption of digitisation is one such change, and at Sea, we’re playing a pivotal role in driving global maritime trade forward by providing the intelligent marketplace for fixing freight. As the adoption and implementation of Artificial Intelligence (AI) booms, we have also been gearing up to use our unique position in the industry to provide state-of-the-art AI solutions to our customers.
As we embark on our AI journey, we recently attended the second Annual Generative AI Summit in London, a key event for leaders in AI, data, technology, and innovation. The event provided plenty of insights to inform our progress.
In the first of this three-part blog series, we’ll be taking a deep dive into the key takeaways from the Summit, and how they are shaping our approach to AI adoption. Our starting point is context-specific models – the foundation of today’s most useful AI tools.
Context-specific models: The foundation of effective GenAI
It feels like it’s been a while since large language models (LLMs) burst onto the scene, with the launch of ChatGPT changing the playing field by bringing Generative AI (GenAI) to the masses. But in reality, most businesses are only just at the start of their GenAI journey. Like us, many are actively exploring its potential to revolutionise various sectors.
One of the crucial insights from the Summit was the shift from a model-centric approach to a data-centric approach. Until very recently, the focus was on the technical capabilities and strengths of competing machine learning (ML) models. However, recent advances and innovations mean there are now hundreds of openly available models that companies can choose from.
These are mostly trained on petabytes upon petabytes of data and designed by the heavy hitters in the tech industry. While these models have become increasingly accessible as off-the-shelf solutions for most businesses, it’s becoming clear that general-purpose LLMs of this kind are poorly placed to deliver value for businesses in the long term. Their responses can be generic or even inaccurate, and they are prone to hallucination.
Today, the spotlight is on how business data is structured and labelled, and this is where the true value lies. This is what allows LLMs to improve the accuracy of their responses and reduce hallucinations.
That’s why data quality outranks data quantity. To effectively address specific business use cases, LLMs must be enhanced with high-quality, industry-specific data tailored to the particular requirements of the task – be it optimising sales or managing pre-fixture contracts.
The value of data-first models
Much of the conversation at the Summit focused on the value of feeding foundational models new and more relevant data. This can increase accuracy and fix errors more effectively than the complex process of tweaking the model’s architecture itself, which is what we would do with traditional ML. Another way to boost accuracy is to change the context in which the model operates, rather than altering the model itself.
For instance, tuning the embeddings in a Retrieval-Augmented Generation (RAG) setup can significantly improve how well the model retrieves and uses context-specific information, without simply feeding it more of the same data. RAG systems also extend the capabilities of more general models by expanding the data they draw their responses from with private or domain-specific data – without requiring that potentially sensitive data be used in training.
A fun and easy-to-understand analogy is to think of a language model as a university student: well educated, but you still would not put them into a client-facing, critical role without onboarding and specialist training. Tuning GenAI models on specific target data is akin to training that student for a particular business task.
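To make the retrieval step of RAG concrete, here is a minimal sketch of how a system might rank domain documents against a query before handing the best matches to an LLM as context. The bag-of-words embedding and the maritime snippets below are purely illustrative; production RAG systems use learned dense embeddings and a vector store rather than word counts.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words count vector.
    # Real RAG systems use learned dense embeddings instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Illustrative domain snippets a maritime RAG index might hold
documents = [
    "Laytime is the period allowed for loading and discharging cargo.",
    "Demurrage is payable when laytime is exceeded at the port.",
    "A charter party is the contract between shipowner and charterer.",
]

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# The retrieved context is prepended to the prompt sent to the model,
# grounding its answer in domain data it was never trained on
question = "What happens when laytime is exceeded?"
context = retrieve(question, documents)
prompt = ("Answer using only this context:\n"
          + "\n".join(context)
          + "\nQuestion: " + question)
```

The key design point is that accuracy improves by changing what the model sees, not by retraining it: swapping in a better embedding model improves retrieval quality without touching the LLM itself.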
What this means for us at Sea
At Sea, we have embraced data-centricity since our inception, and this principle will naturally extend to our AI implementation efforts. As we continue to explore Generative AI, we will stay ahead of the curve, developing and implementing GenAI solutions suited to maritime-specific data and the use cases that benefit our customers most.
With our proprietary AI-ready Central Data Platform, we are well placed to use historical maritime data to better support our customers in their digitisation journey. In the coming weeks, we’ll be sharing more details on our AI policy and the future of our work with this revolutionary technology. In the next part of this blog series, we’ll touch on the next step of AI implementation: human-centricity in AI design.
AI Generated images of Andrew Long and Roger Zorlu from Snapmatic AI Photo Booth