Interview: Pilar Baltar Abalo - AI Leadership, Cultural Adaptation & The Limits of Generative AI
From Computational Linguistics to AI Transformation—What Every Leader Must Know
Pilar’s career has come full circle in AI. As a professional technical translator and PhD researcher in computational linguistics, she started out using machine-learning translation models and exploring the foundations of what would become today’s AI revolution. Outside academia, she went on to lead large-scale technology transformations in cybersecurity, cloud and digital for global enterprises; her current focus is safely scaling and orchestrating GenAI for the enterprise. Pilar complemented her PhD in Computational Linguistics with advanced leadership education: an MSc in Major Programme Management from the University of Oxford and a Global Executive MBA from IESE Business School, giving her a rare blend of deep technical expertise and strategic vision.
In this conversation, we explore not only GenAI’s potential but also its limitations: why hallucination, accuracy and human oversight matter, and what will shape the future of AI.
A non-linear career path. Pilar, your career path is quite unique. You didn’t start in tech, yet you've built an impressive track record leading AI-driven transformations. Tell me more about your journey.
I have always loved machines! As a child, I would go into my dad’s office, dismantle machines and then put them back together. That early curiosity led me to work as a technical translator, and later into computational linguistics and machine translation. The intersection of language and computers is my geekdom of choice. I got my first role in technical consulting thanks to being multilingual and having a passion for technology. Being technology agnostic and focusing on business value meant that I became proficient at helping clients navigate technology paradigm shifts (cybersecurity, cloud, digital and now enterprise AI).
Today, AI is powerful and articulate—sometimes so much so that people mistake it for real intelligence. But it has clear limitations. What do you see as the biggest risks for businesses adopting AI?
You’re right: AI can sound confident even when it is wrong. When retrieving and summarising information, LLMs’ speed and power far exceed what we humans can do. Confidence, speed and huge retrieval power are features that, in humans, correlate highly with extraordinary cognitive ability. The risk of labelling a model as intelligent is that the adjective masks awareness of what the system cannot do well, or at all. Lacking common-sense knowledge, memory and world models means that LLMs can fail spectacularly at tasks humans find easy (and take for granted). Setting the right expectations about GenAI, so that people can trust it appropriately, is key to adoption.
To get value from AI, leaders must understand:
✅What GenAI is great at:
Summarising texts and retrieving language-based information (where an answer with a chance of error now is more valuable than a perfect answer much later)
Generating creative content and code, both with human oversight (areas where a human expert can quickly and/or easily see which answers are wrong)
Brainstorming and ideation
Enhancing individual productivity
Interactive problem-solving when you are an expert
❌Where GenAI falls short
Accuracy and veracity. GenAI hallucinates and makes up facts. Do not use it without expert human supervision where even a small percentage of wrong answers would not be acceptable or when you lack the expertise to discern right from wrong answers.
Real-world critical decision-making. AI lacks true reasoning, moral (legal) discernment, common sense and contextual understanding.
Where empathy and human discernment are needed, expected or valued. AI does not think or feel.
So even as GenAI gets more powerful, businesses must remain critical and supervisory…
Exactly! GenAI is a tool, not a thinking entity. It has no personhood (for now) and therefore cannot take accountability. Human supervision is needed to take accountability for:
Fact-checking GenAI-generated content
Having clear data governance processes (bad data input pollutes GenAI)
Defining clear oversight processes with human-in-the-loop, guardrails and monitoring.
Designing operating models for human-GenAI collaboration and orchestration
This is especially critical in regulated industries like finance, healthcare and law, where wrong answers impact lives, and particularly when designing agents able to interact unsupervised with the physical and digital world at large.
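The oversight principles above can be sketched as a minimal review gate. All names and thresholds here are hypothetical; this is an illustration of the human-in-the-loop pattern, not a production design:

```python
# Minimal human-in-the-loop review gate (hypothetical names; sketch only).
# A GenAI draft is released only after an accountable human approves it;
# low-confidence or high-risk drafts are escalated rather than auto-sent.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    text: str
    confidence: float  # model-reported or heuristic score, 0..1 (assumed)
    high_risk: bool    # e.g. finance, healthcare or legal content

def review_gate(draft: Draft, approve: Callable[[str], bool]) -> str:
    """Release a draft only with explicit human sign-off."""
    if draft.high_risk or draft.confidence < 0.8:
        return "ESCALATED: route to expert reviewer"
    if approve(draft.text):  # the human-in-the-loop decision point
        return draft.text
    return "REJECTED: human reviewer declined"

# Usage: a reviewer callback stands in for a real approval workflow.
print(review_gate(Draft("Quarterly summary ...", 0.95, False), lambda t: True))
```

The design choice worth noting: the gate defaults to escalation, so an unreviewed answer can never reach a customer by accident.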
You’ve said we need to redefine intelligence. What do you mean by that?
GenAI is impressive at next-token pattern prediction at scale and speed, but that is not the same as human intelligence. While there isn’t a universally accepted definition of human intelligence, it cannot be reduced to the ability to perform certain discrete tasks or processes and generate language tokens. There is an old saying in the AI field: intelligence is whatever machines cannot do yet. Once we understand how a computer (or a system) does well something that we thought only humans could do, we stop considering it a defining trait of intelligence: the magic is gone and it becomes just a collection of software features.
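Next-token pattern prediction can be illustrated with a toy bigram model. This is a deliberately crude sketch with made-up data; real LLMs are neural networks trained on vast corpora, not lookup tables, but the principle is the same: the output is the statistically likely continuation, not understanding.

```python
# Toy bigram "next-token predictor" (illustrative only; real LLMs are
# neural networks over vast corpora, not frequency tables).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count which token follows which in the training text.
following: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in the data."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("the"))  # "cat": the commonest pattern, not comprehension
```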
So we need to move beyond the hype and think critically about what GenAI does.
Yes! Ultimately, GenAI is a general-purpose technology, and we need to think critically about how best to use it to our advantage. It does not come with predefined use cases, so the more you use it, the more you are able to discern where you get maximal value and where it just wastes your time or obfuscates. This is where curiosity, for many the defining characteristic of human intelligence, shines. GenAI can be a really powerful thinking aid in interactive problem-solving for experts, and an incredibly patient and effective tutor for learners.
The human brain is adaptive: how we use it changes as we develop tools. The advent of printing, and later of smartphones, changed how and what we commit to memory. Research seems to indicate that regular use of GenAI changes cognitive patterns. Brain neuroplasticity is a thing of beauty and responsibility at the same time: what we choose to do with it influences it. With GenAI, the ideal scenario is that each of us makes a very intentional, conscious choice about which parts of our thinking process we delegate to GenAI and which parts we keep to ourselves. My rule of thumb: if a cognitive process (writing, translating, critical thinking) is something I value and enjoy, I’ll keep doing it. If AI is better or faster at it than me, I’ll try and do it in parallel with AI: it helps me get better at it.
AI literacy is key here: people should understand how the technology works and what its risks and limitations are before delegating tasks to it. For companies, this means investing as much in the humans who will use AI as in the technology itself.
GenAI lacks emotional intelligence, yet it’s being used in areas like customer service, HR, and mental health. Can we embed empathy into GenAI?
GenAI can be really good at simulating empathy, but we need to remember that it does not feel or remember. There is a risk, particularly with agentic AI, in not disclosing that the empathy is coming from an AI agent and not a real person. The risk of misplaced trust is high, as is the disappointment when the patient or customer discovers that this trust was not placed in a human. Hallucination and ethical-misalignment risks should be considered carefully in any context where high empathy is required. Leaders should ensure there is human supervision, ethical guardrails and a clear human escalation path for sensitive interactions.
One major limitation of current GenAI models is short-term memory. Where do you see GenAI going next?
GenAI today operates like someone with infinite recall of internet-available language corpora and a short-term memory problem. The longer your query, the more likely you are to encounter problems: context limits, hallucinations, repetitions and inconsistencies. Current memory approaches are inelegant, ineffective and resource-inefficient. Future GenAI systems will need to address both long-term memory (sense-making, intent, what they keep and what they forget about interactions with a user) and broader context windows to make interactions more cohesive and relevant. From a governance perspective, any memory solution will need to address privacy and bias risks, as well as provide a reinforcement-learning feedback loop for wrong responses and hallucinations.
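The short-term memory problem surfaces as context-window limits, and a common (admittedly crude) workaround is simply dropping the oldest conversation turns, which is exactly why long chats lose coherence. A sketch, with a rough word-count standing in for a real tokenizer:

```python
# Crude context-window management: drop the oldest turns once a token
# budget is exceeded. This is why long conversations "forget" details.
# Word count is a rough proxy; real systems use model-specific tokenizers.

def count_tokens(text: str) -> int:
    return len(text.split())  # stand-in, not a real tokenizer

def trim_history(turns: list[str], budget: int) -> list[str]:
    """Keep the most recent turns that fit within the token budget."""
    kept, used = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = count_tokens(turn)
        if used + cost > budget:
            break                     # everything older is silently lost
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["my name is Ana", "I work in finance", "draft my CV", "make it shorter"]
print(trim_history(history, budget=6))  # the earliest turns are dropped
```

Note how the user's name is the first thing to disappear: the trimming is positional, not based on what actually matters, which is the inelegance the interview describes.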
As GenAI models rapidly become a commodity, the development focus will shift to generating value through the quality of integration with application and database layers. To accelerate speed to trust and wider enterprise adoption, expect progress (in no particular order) on world modelling, energy-efficient architectures, UX improvements, and transparency about data sources and energy consumption (including externalities).
You’ve spoken about how GenAI models don’t always adapt well to different cultures. Why is this a problem?
GenAI models today are trained on predominantly Western language databases, with American English corpora particularly heavily weighted in both data sources and training. Minority languages are underrepresented in training data, and research shows that when models trained mostly in English are applied to minority languages, the results are suboptimal. For languages already at risk, the ease of producing poor-quality, English-contaminated text will have outsized negative effects. But even if you use GenAI exclusively in English, you will find cultural bias in how it generates responses, coming from both the underlying training data sets and the human trainers. For example, if you ask AI to draft a professional CV, it might sound perfectly normal in New York but feel arrogant or overblown in Lisbon or Tokyo. The cultural nuances of tone, politeness and directness are not universal, yet GenAI tends to assume they are, because it reflects the monolingual cultural bias of its designers.
So GenAI’s “intelligence” is shaped by who trains it and where?
Exactly. If you want AI-generated content to be effective, you need good-quality data in sufficient quantities to train the model so that a culturally relevant next-token prediction pattern emerges. I have been testing large language models for their ability to code-switch between languages, and they typically fail. My hypothesis is that they do not have enough examples of the kind of code-switching multilingual humans naturally do for a prediction pattern to emerge.
The real question for businesses (and individuals) is how authentic or generic you want your voice to be. It is easier to recognise language or cultural mistakes where we are expert users. So while it is exciting to take advantage of multimodal language translation capabilities to generate emails, voice messages and even podcasts in unfamiliar languages, I would exercise caution when a competent native speaker is not available to review the output to avoid embarrassment or worse.
Many executives talk about AI, but they don’t use it themselves. What’s your take on this?
If you are leading a business and thinking of deploying AI, you must engage with it first-hand to understand what it can and cannot do. Experts and managers tend to do very well with AI, as they can verify its outputs and have experience delegating tasks. The ideal sponsor for an enterprise AI programme is someone who uses GenAI themselves and understands the technology’s potential and limitations.
What should leaders do differently?
The pace of change and the FOMO-based marketing hype around ever-better models can be overwhelming; focus instead on getting to know any one model better:
1. Experiment with GenAI regularly. Try different GenAI models, ask peers what GenAI tools they are using and which ones they get the most value from. Ask what happens with your data before sharing information you would rather not see in the public domain.
2. Understand where GenAI works and where it fails, in general, and specifically for your industry.
3. Think beyond cost cutting. GenAI leaders are getting value from business model innovation and top-line growth, not just productivity.
Without firsthand experience, leaders risk making poor AI decisions or missing its biggest opportunities.
Final Thoughts: The Call to Action for Business Leaders. Pilar, if you had one message for business leaders navigating AI today, what would it be?
✅ GenAI is powerful, but it has limitations. It hallucinates, lacks common-sense reasoning and requires human oversight for decision-making and accountability.
✅ Redefine intelligence. GenAI is not really thinking; it is predicting a word pattern. Using GenAI can enhance our cognitive processes, or it can erode certain skills if we become dependent. Each of us must decide what to delegate to GenAI and what to retain as human-only in our thinking.
✅ Leaders must engage with GenAI firsthand, understanding its strengths, weaknesses and business impact.