What the Spanish verbs for “knowing” can tell us about the modern workplace and AI
Sometimes language itself shapes the way we understand concepts and the world around us. Having or not having words to describe something changes how we think and how we express ourselves. Famously, in Orwell’s 1984, the whole aim of Newspeak is to “make thoughtcrime literally impossible, because there will be no words in which to express it.” A less sinister example is the many words Scottish Gaelic speakers can use to describe different types of rain.
So, when we’re considering the modern workplace and “knowledge work”, exploring the fact that some other languages have more than one word to describe the “way we know things” (as opposed to just “to know” in English) could yield interesting and novel perspectives.
For the purposes of this article, I’m going to look at Spanish, which, like German, has two different verbs for “to know”. These represent two entirely different ways of knowing the world which, lacking separate words, we often conflate in English.
These different words are:
Saber - to know a fact or how to do something. This is knowing in the abstract and is consistent for everyone who knows it. For example: I am 6ft tall and have brown (okay, grey) hair.
Conocer - to be familiar with some place, person or thing. This form of knowing comes from experience of the thing you know. It’s unique to you because part of the knowledge is the thing you know, but part of it is what you bring to that relationship. For example: I know Alex, but I know him completely differently to the way his brother knows him.
For example: “knowing” (conocer) your neighbourhood from the experience of living there is completely different to “knowing” (saber) it from looking at Google Maps.
Why this matters
Okay, so what? Why does this matter? And why is it relevant to you?
I believe that part of the problem with modern working practice is that we’ve become reliant on the first form of knowing - saber - and have neglected that second type of knowing - conocer.
This is problematic because it favours surface-level, information-based knowledge at the expense of deep understanding and contextual awareness. This shift not only disempowers and disengages us but also lowers the quality of our work. We lose the flexibility and adaptability that are vital in a rapidly changing world.
Without careful consideration, AI will make this trend worse because AI doesn’t have experiential, emotional or social context. It only has information-based knowledge, so it cannot understand in the same way a human can.
Using checklists and recipes instead of judgement
A simple example of this dynamic in action is when we become over-reliant on codification and process to get things done.
Processes are important. There’s a reason that everyone in an operating theatre has to verbally confirm what procedure they think they’re doing before they start. Processes help prevent silly mistakes.
But when we use them instead of good judgement, rather than to complement good judgement, they can have a series of negative impacts.
To illustrate this, consider a checklist for checking a webpage before it goes live. If your team only use the checklist to assess the quality of the page, you will:
Have a brittle list that doesn’t account for novel types of pages or functionality
Simultaneously disempower people doing the checking, to the extent that they don’t use their judgement when dealing with that novelty
Ensure, if you rely on the checklist to define what a “good” webpage looks like, that your team never gains the understanding of what makes a good page that comes from experience
A parallel to this from the non-work world is the displacement of the conocer format of cooking knowledge (judgement) with the saber format (recipes).
My Spanish wife’s abuela was perpetually stumped when asked for a recipe for her meals and would often resort to “cook this until it’s ready” or “add just enough of that”.
Don’t get me wrong: processes and checklists can be important, but they must be used in the right way. They should complement judgement, not replace it. Building that understanding and judgement by giving people experience of what good and bad look like is a much more effective way of doing good work. A reliance on recipes is a poor way to learn how to cook because it perpetuates the loss of that judgement, and the same is true of processes in the workplace.
Video calls and in-person meetings
Another place we can see the replacement of implicit knowledge with explicit data is the increasing use of video calls that hybrid work demands. This has diluted the knowledge that comes from experiencing someone in person (see The Trauma of Zoom for an excellent exposition of this), whilst at the same time increasing asynchronous communication like Trello cards, which gives us even less emotional context. As a result, the number of meetings has mushroomed whilst their quality has decreased. Last year Microsoft published data on the use of Microsoft 365 showing that people spend, on average, 57% of their time communicating (in email and Teams) and are finding it exhausting.
AI can only know the saber, not the conocer
Without judicious application, AI is likely to take this trend to another level. In the same paper, Microsoft promoted the idea of meetings as a “digital artefact” where people can use an AI-powered chat interface (that parses a transcript) to find out if they’ve been mentioned in a meeting or to catch up on what’s happening in the meeting. This is the archetype of favouring abstract knowledge over knowledge from experience, replacing meaningful interaction with data about the interaction.
In fact, using AI to summarise anything is fraught with risk because AI cannot experience it as you do. The summary will be like a map: useful to an extent but stripped of the context which adds the most value. For example: a summary of a meeting transcript or an email thread will not include the social and emotional context that is vital for a proper understanding of the real meaning of the meeting*. In addition, using a summary instead of actually attending the meeting or reading the thread weakens your relationships with your colleagues who were present.
If someone’s experience of a meeting can be replaced by a chatbot, the chances are the meeting is poorly designed or unnecessary. The same is true of using AI to summarise long email or Slack threads. The solution is to find better ways to communicate, not to use AI to make bad communication slightly more digestible at the expense of its context and meaning.
What can we do about it?
Both Microsoft’s and Google’s workspace AI integrations (Copilot and Duet) seem to be built with the well-meaning intention of using AI to resolve the glaringly obvious problems with modern work. That’s great if you’re selling AI subscriptions, but it will, I fear, only exacerbate these problems.
Instead, organisations need to be thinking about how to create the conditions for people to know through experience, and use this understanding to do their work.
Leaders can do this in a few ways:
Design better meetings that are focused on collaboration and discussion
Use asynchronous communication for non-urgent updates
Find a format for urgent communication (for example, phone calls) that removes any expectation that people will immediately respond to Slack messages. This creates space for focus and depth that is important for conocer.
Think about how your teams and work are structured: if the people closest to the work often get overruled, do they feel like they need to stick to a recipe rather than following their judgement?
Similarly, think about how your team grows and develops. Are you supporting people in developing an understanding of the work they do in their job or are they expected to just follow rules and procedures?
Think about the relationships in your teams and between people in other teams. How are you encouraging people to become familiar with each other as people, not just profiles on Slack or 2-D heads on a Zoom call?
This can all feel messier: using judgement and building relationships are more difficult than checklists and Trello cards. On paper, someone cooking using their instincts and experience seems less reliable than someone following a recipe. But in reality that’s where the real value lies, both for people’s experience of work and for the quality of the work itself.
*Interestingly, there is a school of thought that says it’s fundamentally impossible for AI to achieve “Artificial General Intelligence” that can rival humans, because AI systems “have no body, no childhood and no cultural practice” and aren’t in the world in the same way that people are. See this Nature article.