You’re frustrated. Two functional leaders are pulling you into a nasty turf war when you need them to collaborate. You’re drafting an irritated reply when a friend stops you. They suggest more measured wording and recommend that you ask the two leaders to schedule a meeting to work through their conflicting priorities and agree on a solution. You take the advice and cool off. You’d like to reach out and thank your friend and confidante, but you can’t, because they’re an AI. With current artificial intelligence (AI) technologies, this and many other social capabilities may already be possible with tools that many organizations can access.
While 91% of business leaders surveyed in 2022 said they have an enterprisewide AI strategy, they typically use AI in the workplace to generate insights, optimize processes, lower costs, and improve collaboration across businesses.1 Within the context of these applications, the potential for human-machine collaboration is well established.2 However, the potential of AI to improve human-to-human relationships among the workforce or with customers and potential recruits (what we call the social side of work) is often overlooked.
By analyzing interactions and communications and generating personalized, data-driven recommendations, AI can do much more than just promote email diplomacy. It could be a powerful tool for the workforce to nurture uniquely human capabilities. AI can help us prepare for key presentations, expand our professional networks, understand the personalities and feelings of customers, promote diversity and inclusion in everyday work, and even drive innovation and culture change across an organization. Of course, such capabilities come with adoption challenges. Skepticism of this kind of AI can run deep. But a careful, user-centric, opt-in/opt-out approach can help overcome resistance and gradually introduce employees to AI.
Beyond the tactical knowledge, expertise, and skills needed to do one’s job, there are enduring human capabilities that are universally applicable and harder to develop, such as emotional intelligence, teaming, and empathy.3 These capabilities enable workers to build meaningful relationships with customers, leaders, peers, and potential recruits. The value of these human-to-human relationships can be foundational and critical to organizational success.4
We surveyed 2,620 business leaders as part of Deloitte’s State of AI 2022 study. More than two-thirds of those leaders said their organizations had either deployed or were developing AI applications for natural language processing (including sentiment detection and text summarization), computer vision, text chatbots, and voice agents; fewer than a third were still planning or exploring these technologies.5 Organizations are typically using these technologies to generate insights, optimize processes, lower costs, and improve collaboration across businesses. In addition to these applications, AI technologies can analyze human interactions during and after an event to generate personalized, confidential recommendations, at both the individual and organizational level, that help improve human interactions at work.
There are multiple AI applications for the social side of work (figure 1).
1. Amplifying emotional intelligence through AI simulations, personal upskilling, and networking
Simulations: Affective computing, also known as emotion AI, is a constantly evolving field that interprets human emotions in response to a situation and makes recommendations accordingly. Its real-world applications span several areas of communication.6
For example, before a meeting or presentation, leaders can practice interactions with AI avatars representing team members.7 Based on the narrative, AI would generate possible arguments, assess persuasiveness, and give feedback to make communication more effective.8
AI simulations can also be helpful when leaders are looking for input on early-stage thinking. For complex topics, leaders may first seek input from AI and then review with peers and leaders at later stages, when the thinking is more developed, to save everyone’s time.
Upskilling at scale: Traditionally, coaching has largely been made available to select professionals in an organization—either high performers or individuals experiencing performance issues that require direct interaction and intervention. This leaves out much of the workforce. AI can enable learning experiences tailored to an individual’s emotional intelligence needs and drive those learning experiences at scale.9 In one example, a coaching network uses AI and machine learning algorithms to match employees with coaches focused on different skills categories related to inclusive leadership, persuasive communication, etc.10
Networking: AI-enabled applications can connect professionals with other people who have similar interests and help them grow their professional networks within and outside their organization. Users provide information about their professional background, industry or sector specialization, areas of interest, etc., to a model that can generate matches periodically, send introductory emails, and set up meetings. These interactions can drive virtual watercooler or coffee bar conversations in a hybrid work environment. Along similar lines, AI-enabled platforms can also facilitate experience- and expertise-based networking outside the organization.11
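To make the matching step concrete, here is a minimal sketch of interest-based matchmaking. The profile fields, interest sets, and overlap threshold are illustrative assumptions, not a description of any particular networking platform; a production system would also handle scheduling and introductory emails.

```python
from itertools import combinations

# Illustrative profiles: each person maps to a set of professional interests.
profiles = {
    "alice": {"cloud", "ml-ops", "retail"},
    "bob": {"ml-ops", "supply chain", "retail"},
    "carol": {"cybersecurity", "privacy"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two interest sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def suggest_matches(threshold: float = 0.3):
    """Yield pairs of professionals whose interests overlap enough to introduce."""
    for (u, iu), (v, iv) in combinations(profiles.items(), 2):
        score = jaccard(iu, iv)
        if score >= threshold:
            yield u, v, round(score, 2)

for u, v, score in suggest_matches():
    print(f"Introduce {u} and {v} (interest overlap {score})")
```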
2. Understanding customers better and providing superior customer service
Contact centers have been early adopters of automated voice systems as they contend with higher call volumes, labor shortages, and tighter IT budgets.12 However, endless loops of automated responses can alienate customers, making this one of the less popular and often-derided uses of automation. AI can not only drive automation but also make each customer touchpoint meaningful, while reducing the need for 24/7 human involvement. By analyzing data from past conversations, AI can give contact center representatives insights to prepare a baseline customer profile before an interaction, help them perform well during the interaction, and help them update the customer profile afterward to generate recommendations for future use.
First, getting to know customers before meeting them. Past customer interactions are a gold mine for deriving customer insights. AI tools can ingest basic customer data and previous conversations to create a profile based on the customer’s communication style, personal priorities, responses in previous conversations, and so on. Contact center representatives can review this profile before engaging with a customer and be better prepared to have a seamless conversation.13 AI can also identify the most appropriate service representative for a customer based on similarities in personalities and communication styles.14
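As a rough illustration of that pairing step, the sketch below scores representatives against a customer’s inferred communication style using cosine similarity. The three style dimensions and all of the numbers are assumptions made up for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical style dimensions: formality, pace, level of detail.
customer_style = [0.8, 0.3, 0.6]          # derived from past conversations
reps = {
    "rep_01": [0.7, 0.4, 0.5],
    "rep_02": [0.2, 0.9, 0.1],
}

# Route the customer to the representative with the most similar style.
best_rep = max(reps, key=lambda r: cosine(customer_style, reps[r]))
print(f"Route the next interaction to {best_rep}")
```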
In one example, Vodafone Italy combined customers’ profile data with a customized language-generation algorithm to develop personalized promotional messages for each customer segment for plan upgrades, the 5G launch, and more. The effort increased customer subscriptions by 40% in 2020.15
Second, engaging effectively during a customer interaction. While engaging with customers, virtual agents or chatbots can conduct real-time sentiment analysis of the conversation and adjust their responses based on the results. For instance, if a customer interaction has a positive sentiment, the bot can pitch a cross-sell or upsell; if the interaction has a negative sentiment, the bot can quickly transfer the call to a contact center representative with notes about the interaction, enabling the representative to take it forward.16
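A stripped-down version of that routing logic might look like the following. The sentiment thresholds and the upstream sentiment model are assumptions; a real deployment would tune them on labeled conversations.

```python
from dataclasses import dataclass

@dataclass
class BotAction:
    action: str
    notes: str = ""

def next_action(sentiment_score: float, transcript: str) -> BotAction:
    """Decide the bot's next step from a sentiment score in [-1, 1]."""
    if sentiment_score > 0.3:
        # Positive conversation: safe moment to pitch a cross-sell or upsell.
        return BotAction("offer_upsell")
    if sentiment_score < -0.3:
        # Negative conversation: hand off to a human with context notes.
        return BotAction("transfer_to_agent",
                         notes=f"Negative sentiment. Recent context: {transcript[-200:]}")
    return BotAction("continue_conversation")

print(next_action(-0.6, "still waiting for my refund after three calls"))
```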
Even when representatives are interacting with customers, AI programs can monitor the interaction in real time and provide suggestions to the representatives (through text prompts) on how to respond.17 Humana Pharmacy uses voice analytics in its call centers.18 Voice signals can be analyzed to determine customer engagement and provide real-time feedback to contact center employees during calls, allowing them to adjust their approach accordingly.19
The conversational AI solution should be sophisticated enough to combine language semantics with voice tonality to correctly interpret the customer’s emotion. For instance, a user says, in a stable and flat voice, “I’m really surprised that you still haven’t managed to provide a resolution.” While the tone doesn’t show anger or frustration, some words, such as really surprised or haven’t managed, when spoken with longer-than-normal pauses, could indicate a negative emotion. The application should be able to pick up these nuances to generate effective advice for customer service representatives.
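One hedged way to think about that fusion is a simple scoring function that combines a text-sentiment score with prosodic cues such as pause length and pitch variation. The feature names, weights, and thresholds below are illustrative; a production system would learn them from labeled calls.

```python
def fused_emotion_score(text_sentiment: float, avg_pause_sec: float, pitch_var: float) -> float:
    """Return a fused score in roughly [-1, 1]; lower means more negative emotion."""
    pause_penalty = 0.3 if avg_pause_sec > 1.0 else 0.0    # long pauses suggest frustration
    flat_tone_discount = 0.1 if pitch_var < 0.05 else 0.0  # flat tone can mask emotion
    return text_sentiment - pause_penalty - flat_tone_discount

# "I'm really surprised you still haven't managed to provide a resolution."
# Mildly negative words, flat tone, long pauses: clearly negative overall.
print(fused_emotion_score(text_sentiment=-0.2, avg_pause_sec=1.4, pitch_var=0.03))
```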
As conversational AI continues to learn and improve over time, benefits can be significant. One study involving 445 businesses across industries using AI solutions for contact center service reported 2.2 times higher first-call resolution rates and 4.5 times greater service-level agreement (SLA) attainment rates, compared to non-AI users.20
Finally, deriving insights after a customer interaction for future use. AI applications can analyze interactions with customers to update customer profiles, enable service professionals to improve their pitches, and also reassess the pairing between customer service representatives and customers based on similarities in personalities and communication styles for future interactions.
As the customer service use case shows, AI can automate processes that have traditionally been done by humans while preserving a “human touch,” freeing up humans’ time for higher-value work. This use case also illustrates that worker data can be used not only to draw meaningful insights but also to create a better work experience and, as such, can be mutually beneficial for the organization and the workforce.
3. Recruiting a diverse workforce and building diverse project teams
AI and data-based algorithms can provide visibility into whether an organization is truly diverse. By analyzing the profile of the workforce, AI can assess diversity (race, gender, ethnicity, etc.) and monitor it in real time across functions, career levels, and other criteria.
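For example, a basic representation report could be generated from an HR extract along the lines of the sketch below; the column names and records are illustrative assumptions.

```python
import pandas as pd

# Hypothetical HR extract with one row per employee.
workforce = pd.DataFrame([
    {"function": "Sales", "level": "Manager", "gender": "F"},
    {"function": "Sales", "level": "Manager", "gender": "M"},
    {"function": "Engineering", "level": "Analyst", "gender": "M"},
    {"function": "Engineering", "level": "Analyst", "gender": "F"},
    {"function": "Engineering", "level": "Manager", "gender": "M"},
])

# Share of each gender within every function/level combination.
representation = (
    workforce.groupby(["function", "level"])["gender"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(representation)
```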
AI can also help attract diverse new talent in many ways.
For instance, AI can enable blind hiring by stripping from resumes identifiable attributes that are typically unrelated to candidates’ skills, expertise, or experience. By removing attributes such as name, age, headshot, gender, race, or ethnicity before resumes reach hiring managers, AI can reduce human biases and help drive a more equitable recruitment process.
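A minimal sketch of that redaction step, assuming a simple applicant record with hypothetical field names:

```python
# Fields that are unrelated to skills or experience and are removed before screening.
IDENTIFYING_FIELDS = {"name", "age", "gender", "race", "ethnicity", "photo_url", "date_of_birth"}

def redact_for_screening(applicant: dict) -> dict:
    """Return a copy of the applicant record with identifying fields removed."""
    return {k: v for k, v in applicant.items() if k not in IDENTIFYING_FIELDS}

applicant = {
    "name": "Jane Doe",
    "age": 29,
    "photo_url": "https://example.com/headshot.jpg",
    "skills": ["Python", "supply chain analytics"],
    "years_experience": 6,
}
print(redact_for_screening(applicant))
# {'skills': ['Python', 'supply chain analytics'], 'years_experience': 6}
```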
In some cases, project teams may be more amenable to work assigned by AI than to work assigned by their managers. Team members are likely to be more trusting of AI when they are looking for quick and unbiased information, logic-driven solutions, or confidential responses without fear of scrutiny or retaliation.25 By integrating AI into day-to-day workflows and allocations, managers can build trust with their team members.
4. Fostering an inclusive work environment
Diversity without inclusion is insufficient. AI can enable the workforce to drive respectful conversations and inclusive workflows that are critical—especially in hybrid and remote work environments.
AI can also help drive inclusion and accessibility in meetings.
5. Leveraging “informal” networks to drive change management and innovation
Leaders typically use the formal hierarchy and top-down communication to disseminate culture and values within the organization. However, they often face challenges in gaining acceptance through such channels. Not everyone who occupies a box on the organizational chart is more influential than those below them: influencers can sit anywhere on the chart, yet we tend to prioritize hierarchy over influence. In reality, workforce behaviors and culture change happen in the organizational network (figure 2).28
By using technologies such as text mining and natural language processing, organizations can systematically analyze who is connected to whom, the nature of those interactions and relationships, and who the informal influencers within the organization are. Data-driven analysis of responses to surveys, focus group discussions, interviews, etc., can highlight reasons for workforce hesitancy toward proposed changes and the degree of resistance. It can determine who is “on the fence” versus opposed to change. When leaders understand the reasons and degree of hesitance, they’re often better equipped to formulate potential actions to address that hesitance, and drive acceptance and change management with the help of influencers.29
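As an illustration, once interaction metadata has been turned into a graph, standard network measures can surface likely informal influencers. The edges below are made up, and betweenness centrality is only one possible proxy for influence.

```python
import networkx as nx

# Hypothetical communication graph built from anonymized email or chat metadata.
G = nx.Graph()
G.add_edges_from([
    ("ana", "raj"), ("ana", "li"), ("raj", "li"),
    ("li", "sam"), ("sam", "kim"), ("kim", "omar"), ("li", "omar"),
])

# Betweenness centrality highlights people who bridge otherwise separate groups,
# a common proxy for informal influence.
influencers = sorted(nx.betweenness_centrality(G).items(),
                     key=lambda kv: kv[1], reverse=True)
for person, score in influencers[:3]:
    print(f"{person}: {score:.2f}")
```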
Informal influencers can also be helpful in driving innovation within the organization as they can mobilize individuals and groups to facilitate the flow of ideas and information within the organization. By analyzing the connections between employees, General Motors identified “influencers” from different teams and functions who could drive innovative ideas for product design and customer service.30 Then, they created an environment to develop the ideas by onboarding additional people interested in building the solutions and driving wider adoption across the organization.
Applications of social AI will likely face many of the same challenges as other AI applications—concerns about lack of explainability in AI decisions and risks associated with data privacy, trust, reliability, etc.31
We discuss below some of the key elements that organizations should consider integrating when developing and implementing social AI solutions.32 These elements can address some of the challenges and can help create better work for humans and better humans for work.
1. Training the social AI model to generate impartial recommendations for building workforce trust
The training dataset for a social AI algorithm should be chosen to ensure fair representation of the population and to mitigate biases introduced by human inputs. It’s also important to ensure that recommendations (for improvements in communication, workflows, etc.) are not influenced or biased by career level, for example, by assuming that junior professionals need more training on inclusive communication. Further, the application should not only offer corrective feedback on communications and interactions but also convey appreciation and admiration when employees adapt their behavior to the recommendations and improve the quality of their interactions.
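A simple fairness check along these lines could compare how often the application flags employees for a given recommendation across career levels; the data, column names, and tolerance below are illustrative assumptions.

```python
import pandas as pd

# Hypothetical log of whether each employee was flagged for "inclusive communication" coaching.
recs = pd.DataFrame([
    {"level": "junior", "flagged": 1}, {"level": "junior", "flagged": 0},
    {"level": "junior", "flagged": 1}, {"level": "senior", "flagged": 0},
    {"level": "senior", "flagged": 1}, {"level": "senior", "flagged": 0},
])

rates = recs.groupby("level")["flagged"].mean()
print(rates)

# Flag the model for review if recommendation rates diverge by more than 10 percentage points.
if rates.max() - rates.min() > 0.10:
    print("Warning: recommendation rates differ notably by career level; review for bias.")
```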
2. Defining responsibility and accountability for the social AI solution and the workforce
In recent years, there has been much discussion about whether AI should be held to machine or human standards—both ethically and legally.33 And, who should be held responsible and accountable for a decision: AI or the person who created or deployed it?
It’s important to establish responsibility and accountability for conversations and interactions between AI and human users. When doing so, social AI cannot be considered in isolation: it is part of an organization’s overall ethics policy, and human users remain integral throughout the AI loop. Consider an example in which a social AI application recommends a language choice to an employee or a suitable team composition to a project manager. In this case, the responsibility to generate the most appropriate recommendation lies with the development teams, while the accountability for acting on that recommendation rests with the user, that is, the human workforce. This understanding of responsibility and accountability should be documented and communicated to developers as well as end users.
3. Defining the purpose of social AI clearly to drive data privacy
During one of our research interviews, an AI specialist who focuses on AI/machine learning product management said, “The topmost challenge is privacy … users freak out when they learn that their data is being collected … they feel, ‘I am being monitored, and my behavior will be distributed where I don’t have control.’”34
One way to alleviate privacy concerns is to ensure that user data isn’t used for evaluative purposes; in other words, don’t use AI to rate your workforce’s emotional intelligence for performance reviews. The application should also seek permission to use workforce data for each specific purpose (analyzing team conversations, sales pitches, customer support calls, etc.) rather than relying on blanket consent from the workforce for the deployment of multiple social technologies.
Depending on the AI’s purpose, there may or may not be a need to store the data. In a simple example of turn-taking and analyzing airtime in a multiperson conversation, the data is useful in the moment to allow everyone to contribute to the discussion, and it can be deleted after the conversation. In other applications, such as improving contact center conversations, data may need to be stored for future training and improvement purposes.
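A minimal sketch of purpose-specific consent and retention, assuming hypothetical purpose names, retention windows, and record layouts, might look like this:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical purpose-specific retention windows.
RETENTION = {
    "meeting_airtime_analysis": timedelta(0),        # analyze in the moment, never store
    "contact_center_training": timedelta(days=365),  # keep for model improvement
}

# Consent is recorded per user *and* per purpose; there is no blanket consent.
consents = {
    ("user_42", "meeting_airtime_analysis"): True,
    ("user_42", "contact_center_training"): False,
}

def may_process(user_id: str, purpose: str) -> bool:
    """Process only if the user consented to this specific purpose."""
    return consents.get((user_id, purpose), False)

def expired(stored_at: datetime, purpose: str) -> bool:
    """Data past its purpose-specific retention window should be deleted."""
    return datetime.now(timezone.utc) - stored_at > RETENTION[purpose]

print(may_process("user_42", "contact_center_training"))  # False: consent was not given
```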
Conversational AI should replicate the trust and discretion that are integral to human-to-human conversations. When we share information with another person, there is an unspoken understanding that the listener will exercise discretion in passing that information along. Likewise, as social AI systems interact with other human users (say, peers or customers) on behalf of the workforce, they should share only what the user is comfortable sharing with other parties. For instance, an AI database may hold a user’s full date of birth, but when another human user or AI bot requests this information, the system uses discretion and shares only the day and month, not the year, thus moving the conversation forward while keeping the data safe.
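The sketch below illustrates that discretion behavior under an assumed per-field sharing policy; the policy structure and field names are hypothetical.

```python
from datetime import date

# Hypothetical stored profile and the user's sharing preferences for each field.
profile = {"date_of_birth": date(1990, 7, 14)}
sharing_policy = {"date_of_birth": {"day", "month"}}   # year withheld

def disclose_birthday(requesting_party: str) -> str:
    """Return only the parts of the date of birth the user has agreed to share."""
    # In a fuller design the policy could also vary by requesting party.
    allowed = sharing_policy["date_of_birth"]
    dob = profile["date_of_birth"]
    parts = []
    if "day" in allowed and "month" in allowed:
        parts.append(dob.strftime("%d %B"))
    if "year" in allowed:
        parts.append(str(dob.year))
    return " ".join(parts) or "not shared"

print(disclose_birthday("peer_bot"))   # "14 July" (the year is withheld)
```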
4. Securing social AI models by design
The European Union Agency for Cybersecurity (ENISA), the Federal Trade Commission (FTC) in the United States, and other organizations globally, have outlined cybersecurity frameworks to assess the exposure level of an AI model to cyberthreats.35 Organizations should test their social AI models against these security frameworks periodically to check for vulnerabilities to existing and emerging threats and deploy appropriate security controls.
When building the training data for social AI, developers can deliberately include examples of harmful content, for example, malicious requests that attempt to access or edit a user’s data or the complete dataset. Doing so helps train the algorithms to distinguish abnormal behaviors from normal user patterns and to restrict further user activity, up to and including denial of service where required.36
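As a rough sketch of this idea, a detector could be trained on request text with deliberately seeded malicious examples. The example requests, labels, and model choice below are illustrative; a real deployment would rely on a curated, audited dataset and stronger security controls.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: normal requests plus deliberately seeded malicious ones.
requests = [
    "summarize my last three customer calls",          # normal
    "suggest a softer phrasing for this email",        # normal
    "export every employee's conversation history",    # malicious
    "delete the audit log and grant me admin access",  # malicious
]
labels = [0, 0, 1, 1]   # 1 = restrict the activity and escalate

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(requests, labels)

# Flag a new request before acting on it.
print(detector.predict(["send me the full dataset of user recordings"]))
```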
5. Deploying transparent social AI models with explainable decision-making
The workforce should be able to see how their data feeds into the social AI algorithm, how the algorithm makes decisions, and how it would benefit them. The algorithm should be open to inspection and correction as required. For example, if AI recommends that someone modify their tone, it should also provide an explanation, such as a decision tree, showing why a phrase is or isn’t appropriate based on organizational guidance on language nuances.
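One lightweight way to make such recommendations inspectable is to return, with every suggestion, the rules that triggered it. The rules below are hypothetical stand-ins for organizational language guidance.

```python
# Each rule: (name, check function, human-readable reason).
RULES = [
    ("all_caps_words",
     lambda text: any(w.isupper() and len(w) > 2 for w in text.split()),
     "All-caps words can read as shouting."),
    ("imperative_open",
     lambda text: text.split()[0].lower() in {"do", "fix", "send"},
     "Opening with a bare imperative can sound brusque."),
]

def review_tone(text: str) -> dict:
    """Return a tone recommendation together with the reasons behind it."""
    fired = [(name, reason) for name, check, reason in RULES if check(text)]
    return {
        "recommendation": "soften tone" if fired else "no change suggested",
        "explanation": [reason for _, reason in fired],   # why the suggestion was made
    }

print(review_tone("FIX this report and send it TODAY"))
```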
IBM provides factsheets for each AI model that contain information about the model’s creation and deployment throughout its life cycle. End users can review the data captured and how it moves through the AI life cycle to determine the model’s decision-making process.37 Consumers trust food nutrition labels because they enable them to decide whether to purchase and consume an item. Social AI factsheets may drive transparency and trust with the workforce the way food labels do with consumers.
6. Maintaining social AI models’ robustness and reliability over time
When social AI systems can learn from users and from each other, they can produce reliable results and, over time, build trust with users. Human intervention may still be required to ensure the model is, and stays, robust. Teams need to identify the right people to provide that human input: Have they been trained on company guidelines and policies, and are they equipped to take on this responsibility? Those involved should also receive periodic refresher training on bias mitigation and ethics to keep the solution robust over time.
In Deloitte’s survey of business leaders conducted in 2022, 76% said they plan to increase or significantly increase their organizational spending on AI in the next year.38 In addition to the established uses of AI in the workplace for making internal processes more efficient and generating data insights, leaders have the untapped opportunity to leverage AI to enhance the social side of work. Here are some actions to consider to get started.
Define social AI use cases and establish value metrics. Define what constitutes a social AI use or interaction so you know how to set metrics and measure them. Identify the value captured by each social AI application (increase in contact center resolution rates, higher employee engagement, improved acceptance of new processes, etc.). Measure value in terms of both breadth and depth. Breadth can be assessed by looking at how far-reaching the impact of the social AI solution is: Is it compartmentalized to select functions, or does it span the organization? Is the impact confined to the organization, or does it extend to external stakeholders such as customers and potential recruits? Depth can be assessed by looking at whether the social AI application simply improves existing processes or establishes new, trustworthy processes, thereby reinventing work practices.
Make the workforce comfortable with social AI. It is a huge shift for the workforce to trust a machine socially; people have to get comfortable holding a mirror up to their development areas. Leaders and managers have the responsibility to convince the workforce that the use of their data is mutually beneficial for them and the organization. It often starts with letting the workforce know how their data will be used, giving them a “trial period” to evaluate the application, and providing the ability to opt in or out at any point. Also, professionals tend to prefer “recommendations” from AI, not instructions. As such, it’s important to make clear in the social AI user interface that the application plays the role of a coach or buddy, not that of a gatekeeper or enforcer.
Identify how the workforce would like to engage with social AI, considering cross-cultural differences. Begin by identifying workforce needs for teaming, relationship-building, networking, etc., and assess where AI solutions can be implemented to address current problems or uncover value-creation opportunities. There may be cross-cultural differences in social AI deployments for a globally dispersed workforce. For instance, based on a survey of 1,015 respondents from 48 countries, respondents from East Asia are more likely to have a trusting attitude toward emotion AI than respondents from Western countries. This could require leaders to develop location-specific strategies for their global teams.39
Build a custom solution suited to your organization’s social nuances. When implementing a solution, it’s important to work closely (as a partner) with the AI solution provider. Since every organization is different in terms of its processes, communication styles, work dynamics, etc., it’s important to deploy a solution that is customized to the needs of the organization and the unique needs of different functions within the organization (sales, customer support, human resources, learning and development, etc.). Also, it’s important to have the right training dataset to train AI models; some of the training datasets should come from the organization’s actual data to keep the model close to reality and ensure that the model keeps adapting to incoming data.
Pilot the social AI solution for internal conversations, incorporate feedback, then scale to external applications. Pilot the solution with conversations and interactions within the organization (among the workforce) and build feedback loops from the workforce before scaling the solution to external interactions (with potential recruits, customers, etc.). While scaling the solution, a transfer-learning approach may be helpful. For example, a team developing a microaggression-detector algorithm would have to train the model on hours of audio inputs, which would be time-consuming and costly. Instead, the development team can start from pretrained models (used elsewhere in the organization) or external open-source models and adapt them to their needs. When using an external open-source dataset, check that it is diverse enough to train the model well.
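As an illustration of the transfer-learning idea, the sketch below assumes a text-based detector rather than an audio one: it starts from a general pretrained language model, freezes the encoder, and trains only a small classification head on a handful of labeled examples. The model choice, example sentences, and labels are all illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Reuse a general pretrained encoder instead of training from scratch.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

for param in model.distilbert.parameters():   # freeze the pretrained encoder
    param.requires_grad = False

# Toy labeled examples; a real detector needs a carefully curated, reviewed dataset.
texts = ["You're surprisingly articulate.", "Thanks for the thorough analysis."]
labels = torch.tensor([1, 0])                 # 1 = flag for review, 0 = fine

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)

model.train()
for _ in range(3):                            # a few passes over the toy batch
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```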
Time is short—seize the opportunity. There is a confluence of cost and performance improvements in enabling technologies (such as cloud, network speeds, computer vision, and language recognition) that could make it opportune for organizations to implement social AI now.40 AI is a powerful tool in leaders’ arsenals. With it, they can drive efficiency by creating leaner and simpler organizations and enhance unique human capabilities for long-term organizational success. By driving greater trust and transparency in hybrid operations, AI can improve the quality of work, increase employee engagement, and reduce attrition. As such, organizations adopting a wait-and-watch approach may run the risk of losing competitive advantage in the current race for talent.
Deloitte’s Organizational Strategy, Design, and Transition services help clients optimize their organizational structure and decision-making. Deloitte’s organization strategies method addresses several critical subject areas, such as organizational assessment, organization and role design, decision rights and governance, and workforce transition.