As Lemongrass’ CIO, Kausik Chaudhuri is laser-focused on advancing the brand’s intellectual property initiatives, which include working on Lemongrass Cloud Platform (LCP), its multi-cloud control plane.
Lemongrass, a software-enabled services provider synonymous with SAP on cloud, enables its customers to migrate to and operate on AWS, Microsoft Azure, and Google Cloud. Chaudhuri's role involves ensuring that the teams overseeing customer migration and steady-state operations have the best tools - whether in-house or third-party - for the job.
A seasoned tech professional and award-winning global IT executive, Chaudhuri has spent almost a decade of his 25-year career at Dell and another 10 years at HP. At Virtustream, a former Dell private cloud infrastructure and services business, he oversaw both migration projects and managed services, a capacity in which he also operated at HP.
With a varied technical background, Chaudhuri recognises that all data has value and context. This becomes especially important when training AI tools, as the algorithms demand a wealth of varied information to make informed decisions.
Here, he delves into the problems that arise when AI is viewed as a cure-all technology, and explains how the value of AI comes from the way it is trained and implemented.
How does the way AI is trained add value to it?
We must acknowledge the presence of biases and subjectivity in data, just as in people and society. Data inherently carries biases because it originates from tools designed by humans, and biased data inevitably leads to biased decisions. This can manifest as representational bias and subjectivity, which are inherent to human nature. Unfortunately, the data collected over the years often reflects these undesired traits, which hinders unbiased responses when querying AI systems.
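As a loose illustration of the representational bias described above - with purely hypothetical group labels, not any real dataset - a quick distribution check before training can surface the problem early:

```python
# Minimal sketch: a representational-bias check on a labelled dataset
# before it is fed to an AI tool. The group names and counts below are
# illustrative assumptions only.
from collections import Counter

def representation_share(samples, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# A skewed toy dataset: group B is badly under-represented.
samples = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
shares = representation_share(samples, "group")
print(shares)  # {'A': 0.9, 'B': 0.1}
```

A check like this does not remove bias, but it makes the skew visible so curators with the right domain expertise can decide how to rebalance the data.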
Another critical aspect is that data is ever-changing. Data that was accurate and valid a decade ago may no longer be accurate today. As a result, the decisions based on historical data may need to be re-evaluated as circumstances change.
Finally, there is the significance of domain expertise. Those responsible for curating data fed into AI tools must possess a strong understanding of the subject matter context. Without this domain expertise, the data loses its relevance and validity. For example, if someone with no background in automotive technology were tasked with collecting data for an AI system designed to drive a car, the data they gather would likely be inadequate for the AI or ML tool to make critical driving decisions.
Why can't you put data into an AI tool and expect it to understand? What does this mean when it comes to the bias of output?
Extensive subject matter expertise is paramount when feeding data to AI tools; without it, the data loses its value and relevance, and there is a serious risk of bias arising in the AI's decision-making process. Biased data, fed into the system without the proper expertise, can result in unreliable outputs. Domain knowledge is absolutely critical.
How can organisations use AI to its full effect and avoid any errors in output?
Clearly define your objectives – this is the fundamental starting point. When there is ambiguity about what you are aiming to achieve, training alone won't suffice. Misalignment between your objectives and the process of data collection and AI training can result in inadequate outcomes.
We can use the example of data collected from cameras on a vehicle. Over time, the quality of the data has evolved, rendering the data used to train AI systems a decade ago less relevant today. As data changes, its quality becomes paramount in influencing the decision-making process.
Data is also full of noise. Taking the example of vehicle cameras again, the valuable data relates to objects found on the road, not other environmental surroundings, like people standing on nearby balconies. Removing unnecessary noise from the data enhances its quality.
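The noise-filtering idea above can be sketched loosely in code - assuming hypothetical labelled detections from a vehicle camera, which are illustrative rather than any real system's output:

```python
# Minimal sketch: stripping environmental "noise" out of camera detections
# before they are used as training data. The class names and the relevance
# rule are illustrative assumptions, not production logic.

ROAD_RELEVANT = {"vehicle", "pedestrian_on_road", "traffic_sign", "lane_marking"}

def filter_noise(detections):
    """Keep only detections whose class matters for driving decisions."""
    return [d for d in detections if d["label"] in ROAD_RELEVANT]

raw = [
    {"label": "vehicle", "bbox": (10, 20, 50, 60)},
    {"label": "person_on_balcony", "bbox": (300, 5, 320, 40)},  # noise
    {"label": "traffic_sign", "bbox": (200, 15, 220, 45)},
]

clean = filter_noise(raw)
print([d["label"] for d in clean])  # ['vehicle', 'traffic_sign']
```

The person on the balcony is dropped, so the training set carries only the observations that bear on the decisions the AI must make.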
Again, human oversight is also vital. AI tools require human governance to prevent unintended actions or outcomes.
And finally, you need to implement testing and validation. Building a model based on data is only half the equation; it must be verified to ensure it performs as expected. This involves collecting and training the AI tool and creating a pilot environment for testing, ensuring that it aligns with your anticipated outcomes.
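The "build, then verify" step described above can be sketched as a toy pilot check - the majority-label model, the ordered split, and the acceptance threshold are all illustrative assumptions:

```python
# Minimal sketch: hold back a validation set and only promote the model
# if it meets an agreed accuracy bar. Everything here is a toy example.

def train(examples):
    """Toy 'model': predicts the majority label seen in training."""
    labels = [y for _, y in examples]
    return max(set(labels), key=labels.count)

def validate(model_label, examples):
    """Share of held-out examples the toy model gets right."""
    correct = sum(1 for _, y in examples if y == model_label)
    return correct / len(examples)

data = [(x, "stop") for x in range(8)] + [(x, "go") for x in range(2)]
split = int(len(data) * 0.8)       # 80/20 train/validation split
model = train(data[:split])
accuracy = validate(model, data[split:])

PILOT_THRESHOLD = 0.75             # assumed acceptance bar
print("deploy" if accuracy >= PILOT_THRESHOLD else "retrain")  # retrain
```

Here the naive ordered split leaves all the "go" examples out of training, so the model fails its pilot - exactly the kind of mismatch between expectation and behaviour that a validation stage exists to catch before deployment.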
How do you predict the future of AI looking if it is monitored, trained and implemented ethically and properly?
There are a few sectors where we can expect to see rapid implementation of AI. For example, transportation, healthcare, scientific research, customer experience, manufacturing and the government sector will all benefit from machine learning and AI tools.
One thing is certain, AI will provide a platform from which human workers can grow and develop – it will assist the current workforce, not replace it. In words often attributed to Albert Einstein, "Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination."