AI Appreciation Day – Part One

On AI Appreciation Day, Intelligent CIO has gathered commentary from industry leaders who share their perspectives on the state of AI, its benefits and the challenges toward responsible use.

Aviral Verma, Lead Threat Intelligence Analyst, Securin 

We are on course towards Artificial General Intelligence, or AGI, where AI goes beyond imitation and can exhibit human-like cognitive abilities and reasoning – AI that can grasp the nuances of language, context and even emotions. I understand the case for caution, the fear of AI replacing humans. But I envision this evolution enhancing a human-AI symbiotic relationship, where AI’s true potential lies in complementing our strengths and compensating for our weaknesses.

Humanity is a race of creators, inventors, thinkers and tinkerers; AGI can help us be even better at all those things and act as a powerful amplifier for human ingenuity.   

To promote safety for all users and responsible AI deployment, developers must uphold Choice, Fairness, and Transparency as three critical design pillars:   

  • Choice: It’s essential that individuals have meaningful choices regarding how AI systems interact with them and affect their lives. This includes the right to opt-in or opt-out of AI-driven services, control over data collection and usage and clear explanations of how AI decisions impact them. Developers should prioritize designing AI systems that respect and empower user autonomy (a minimal opt-in example is sketched after this list).
  • Fairness: AI systems must be developed and deployed in ways that ensure fairness and mitigate biases. This involves addressing biases in training data, algorithms and decision-making processes to prevent discrimination based on race, gender, age or other sensitive attributes. Fairness also encompasses designing AI systems that promote equal opportunities and outcomes for all individuals, regardless of background or circumstances.   
  • Transparency: Transparency is crucial for building trust in AI systems. Developers should strive to make AI systems understandable and explainable to users, stakeholders and regulators. This includes providing clear explanations of how AI decisions are made, disclosing limitations and potential biases, and ensuring transparency in data collection, usage and sharing practices. Transparent AI systems enable scrutiny, accountability and informed decision-making by all parties involved.   
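
To make the Choice pillar above concrete, here is a minimal sketch of an explicit, per-feature opt-in gate. It is a generic illustration rather than any particular vendor’s API; the UserPreferences type, its field names and the feature name are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    """Illustrative per-user consent record (names are assumptions, not a real API)."""
    ai_features_enabled: bool = False          # opt-in: AI is off by default
    allow_data_for_training: bool = False      # data-usage consent tracked separately
    consented_features: set = field(default_factory=set)

def ai_feature_allowed(prefs: UserPreferences, feature: str) -> bool:
    """An AI feature runs only if the user opted in globally and for this feature."""
    return prefs.ai_features_enabled and feature in prefs.consented_features

# Usage: the AI suggestion only runs with explicit consent; otherwise fall back.
prefs = UserPreferences()
prefs.ai_features_enabled = True
prefs.consented_features.add("smart_reply")

if ai_feature_allowed(prefs, "smart_reply"):
    print("Generating AI suggestion (with a visible 'AI-generated' label)...")
else:
    print("AI assistance disabled - showing the standard experience.")
```

The design point is simply that AI assistance is disabled by default and enabled per user and per feature, with consent recorded where it can later be audited.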

The tech industry is on the edge of something truly exciting, and I am optimistic about the advancements individuals and organizations can achieve with AI.  

To build confidence in AI, we should focus more on Explainable AI (X-AI). By clarifying AI’s decision-making processes, X-AI can alleviate the natural skepticism people have about the “black box” nature of AI. This transparency not only builds trust but also lays a solid foundation for future advancements. With X-AI, we can move beyond the limitations of a “black box” approach and foster informed, collaborative progress for all parties involved.   
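
As a small illustration of what “explainable” can look like in code, the hedged sketch below trains a plain logistic regression on synthetic data and breaks one prediction down into per-feature contributions (coefficient × feature value). The feature names and data are invented for the example, and real X-AI tooling such as SHAP or LIME goes considerably further than this.

```python
# A minimal sketch of explainability: for a linear model, each feature's
# contribution to a single prediction is simply coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["login_attempts", "session_length", "geo_distance"]  # illustrative
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

sample = X[0]
contributions = model.coef_[0] * sample  # per-feature contribution to the log-odds
print(f"Predicted class: {model.predict([sample])[0]}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.3f}")
```

Even this toy breakdown lets a user see which inputs pushed the decision one way or the other, which is the kind of visibility the “black box” framing lacks.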

Anthony Cammarano, CTO & VP of Engineering, Protegrity 

On this AI Appreciation Day, we reflect on AI’s remarkable journey to an everyday consumer reality. As stewards of data security, we recognize AI’s transformative impact on our lives. We celebrate AI’s advancements and democratization, bringing powerful tools into the hands of many. Yet, as we embrace these changes, we remain vigilant about the security of the data that powers AI.    

Vigilance requires understanding the nuances of data protection in an AI-driven world. It requires a commitment to securing data as it traverses the complex pipelines of AI models, ensuring that users can trust the integrity and confidentiality of their most sensitive information. Today, we appreciate AI for both its potential and its challenges, and we renew our commitment to innovating data security strategies that keep pace with AI’s rapid evolution.
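
One common pattern for protecting data as it moves through an AI pipeline – offered here as a generic, hedged sketch rather than a description of Protegrity’s own products – is to pseudonymize sensitive values before they reach a model and restore them afterwards. The example handles only email addresses, and all function names are invented.

```python
# A minimal sketch of one data-protection pattern for AI pipelines:
# pseudonymize sensitive values (here, just email addresses) before the text
# reaches a model, then restore them in the model's output.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(text: str):
    """Replace each email with a stable placeholder; return text and the mapping."""
    mapping = {}
    def repl(match):
        return mapping.setdefault(match.group(0), f"<EMAIL_{len(mapping)}>")
    return EMAIL_RE.sub(repl, text), mapping

def restore(text: str, mapping: dict) -> str:
    """Put original values back into model output that echoes the placeholders."""
    for original, token in mapping.items():
        text = text.replace(token, original)
    return text

prompt = "Summarize the complaint from jane.doe@example.com about billing."
safe_prompt, mapping = pseudonymize(prompt)
print(safe_prompt)   # the model only ever sees <EMAIL_0>

model_output = f"Reply to {list(mapping.values())[0]} with an apology."  # stand-in for a model call
print(restore(model_output, mapping))
```

In a real pipeline the mapping would live in a protected vault rather than in memory, but the shape of the idea is the same: the model never sees the raw sensitive value.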

As we look to the future, we see AI not as a distant concept but as a present reality that requires immediate attention and respect. We understand that with this great power comes great responsibility and we are poised to meet the challenges head-on, ensuring that our data – and, by extension, our AI – is as secure as it is powerful. Let’s continue to appreciate and advance AI, but let’s do so with the foresight and security to make its benefits lasting and its risks manageable.   

Kathryn Grayson Nanz, Senior Developer Advocate, Progress 

This AI Appreciation Day, I would encourage developers to think about trust and purposefulness. Because when we use AI technology without intention, we can actually do more harm than good.  

It’s incredibly exciting to see Gen AI develop so quickly and make such remarkable leaps forward. But it also brings a responsibility to build safely with a fast-moving technology.

It’s easier than ever before to take advantage of AI to enhance our websites and applications, but part of doing so responsibly is being aware of the inherent risk – and doing whatever we can to mitigate it. Keep an eye on legal updates and be ready to make changes in order to comply with new regulations. Build trust with your users by sharing information freely and removing the “black box” feeling as much as possible. Make sure you’re listening to what users want and implementing AI features that enhance – rather than diminish – their experience. And establish checkpoints and reviews to ensure the human touch hasn’t been removed from the equation entirely.
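
One simple way to keep that human touch is a confidence-based review checkpoint. The sketch below is a generic illustration under assumed names and thresholds: AI output that falls below a confidence cut-off is queued for a person to approve rather than shown directly to users.

```python
# A minimal sketch of a human-in-the-loop checkpoint: AI output below a
# confidence threshold is queued for human review instead of shipping directly.
from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 0.85  # illustrative cut-off, tuned per feature in practice

@dataclass
class AISuggestion:
    text: str
    confidence: float  # assumed to come from the model or a separate scorer

def route(suggestion: AISuggestion, review_queue: list) -> Optional[str]:
    """Auto-publish confident output; send everything else to a human reviewer."""
    if suggestion.confidence >= REVIEW_THRESHOLD:
        return suggestion.text
    review_queue.append(suggestion)
    return None  # nothing is shown to the user until a human approves it

queue: list = []
print(route(AISuggestion("Your order ships Tuesday.", 0.97), queue))  # published
print(route(AISuggestion("We will refund you twice.", 0.42), queue))  # None, queued
print(f"{len(queue)} suggestion(s) awaiting human review")
```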

Arti Raman, CEO and Founder, Portal26 

Generative artificial intelligence (GenAI) offers employees and the C-suite a new arsenal of productivity tools compared to the unreliable AI we’ve known for the past couple of decades. But as we celebrate these advancements this AI Appreciation Day, it’s less clear how organizations plan to make their AI strategies stick. They are still throwing darts in the dark, hoping to land on the perfect implementation strategy.

For those looking to make AI work for them and mitigate the risks:   

1. The technology to address burning security questions regarding GenAI has only been around for approximately six months. Many companies have already fallen victim to the negative consequences of GenAI and its misuse. Now is the time to ask: ‘How can I have visibility into these large language models (LLMs)?’

2. The long-term ability to audit and have forensics capabilities across internal networks will be crucial for organizations wanting to ensure their AI strategies work for them – not against them.    

3. These core capabilities will ultimately drive employee education and an understanding of how AI tools are best utilized internally. You can’t manage what you can’t see or teach what you don’t know. Having the ability to see, collect and analyze how employees use AI, where they’re using it most and what they’re using, is invaluable for long-term strategy (a minimal usage-audit sketch follows this list).
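
As a rough illustration of that visibility – the log format, field names and tools below are invented, not a description of Portal26’s platform – AI usage events can be aggregated by tool and by team to show where GenAI is actually being used:

```python
# A minimal sketch of AI-usage visibility: aggregate an (invented) usage log
# by tool and by team to see where employees actually use GenAI.
from collections import Counter

usage_log = [  # in practice sourced from proxies, browser plugins or API gateways
    {"user": "a.chen", "team": "marketing", "tool": "chatgpt"},
    {"user": "b.diaz", "team": "engineering", "tool": "copilot"},
    {"user": "a.chen", "team": "marketing", "tool": "chatgpt"},
    {"user": "c.ivanov", "team": "finance", "tool": "chatgpt"},
]

by_tool = Counter(event["tool"] for event in usage_log)
by_team = Counter(event["team"] for event in usage_log)

print("Usage by tool:", dict(by_tool))
print("Usage by team:", dict(by_team))
```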

AI has marked a turning point globally, and we’re only at the beginning. As this technology evolves, so must our approach to ensuring its ethical and responsible usage.   

Roger Brulotte, CEO, Leaseweb, Canada 

In an age where “data readiness” is crucial for organizations, the rapid adoption of AI and ML highlights the need for cloud computing services. Canada stands as a pioneer in this technological wave, with its industries using AI to drive economic growth. Montreal is quickly establishing itself as an AI hub with organizations like Scale AI and Mila – Quebec Artificial Intelligence Institute.

Companies working with AI models need to manage extensive datasets, requiring robust and flexible solutions that can handle complex tasks, train on large datasets and navigate neural networks. While the fundamental architecture of AI may remain constant, its components must scale up and down depending on the model’s state.

As the data-driven landscape keeps evolving, organizations must select data and hosting providers who can keep up with the times and adjust as needed, especially as Canada implements its spending plan to bolster AI on a national level.   

On AI Appreciation Day, we recognize that superior AI outcomes are powered by data, which is only as effective as the solutions that enable its use and safeguarding.   

Steve Wilson, CPO, Exabeam 

My recognition of AI Appreciation Day is part celebration, part warning for fellow AI enthusiasts in the security sector. We’ve seen AI capabilities progress dramatically, from simple bots playing chess, to self-driving cars and AI-powered drones with a spectacular potential to truly change how we live. While exciting, AI innovation is often unpredictable. Tempering our enthusiasm is the sobering reality that rapid progress – while filled with opportunity – is never without its challenges.    

The fascinating evolution of AI and self-learning algorithms has presented very different obstacles for teams in the security operations center (SOC) as they combat adversaries. Freely available AI tools are assisting threat actors in creating synthetic identity-based attacks using fraudulent images, videos, audio and more.

This deception can be indistinguishable from the real thing to humans – and exponentially raises the success rate of phishing and social engineering tactics. To defend themselves, security teams should also be armed with AI-driven solutions for predictive analytics, advanced threat detection, investigation and response (TDIR), and exceptional improvements to workflows.
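
As a small, generic illustration of AI-assisted detection – not Exabeam’s own TDIR stack – an unsupervised model such as an isolation forest can flag logins that look nothing like normal behaviour; the features and data below are synthetic and purely illustrative.

```python
# A minimal sketch of AI-assisted anomaly detection on synthetic login data
# (failed logins per hour, bytes transferred, hour of day). Real TDIR platforms
# are far richer; this only illustrates the unsupervised-flagging idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = np.column_stack([
    rng.poisson(1, 1000),              # failed logins per hour
    rng.normal(5e6, 1e6, 1000),        # bytes transferred
    rng.integers(8, 18, 1000),         # hour of day (business hours)
])
suspicious = np.array([[40, 9e8, 3]])  # many failures, huge transfer, 3 a.m.

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))       # -1 means flagged as anomalous
```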

Before jumping headlong into the excitement and potential of AI, it’s our responsibility to evaluate the societal impacts. We must address ethical concerns and build robust security frameworks. AI is already revolutionizing industries, creating efficiencies and opening possibilities we never could have imagined just a few, short years ago. We’re only getting started and by proceeding with cautious optimism, we can remain vigilant to the obvious risks and avoid negative consequences, while still appreciating AI’s many benefits.  

Ken Claffey, CEO, VDURA

As we observe AI Appreciation Day, it’s worth pointing out that an AI model is only as good as the training data behind it. As we appreciate AI, AI should in turn appreciate the data. For data to be useful, it has to be stored safely: it must be scalable, durable and available; it must have integrity (free from corruption); and you must be able to access it fast and at scale (performance).
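
As a tiny illustration of the integrity requirement – the file name and workflow here are made up for the example – a checksum recorded when a training shard is written can be verified before the data is ever fed into a model:

```python
# A minimal sketch of the integrity requirement: record a checksum when a
# training file is written, and verify it before the data is used.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

data_file = Path("training_shard_000.bin")       # illustrative file name
data_file.write_bytes(b"example training data")  # stand-in for a real data shard

recorded = sha256_of(data_file)                  # stored alongside the data

# Later, before training: fail loudly if the bytes no longer match.
if sha256_of(data_file) != recorded:
    raise RuntimeError(f"{data_file} failed integrity check - refusing to train")
print(f"{data_file} verified ({recorded[:12]}...)")
```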

AI is evolving rapidly, and that pace of innovation and model evolution is something I think we can all appreciate. It challenges all the ecosystem players, like us, to evolve our capabilities just as rapidly – something we are always hard at work on.
