Every aspect of an organization disrupted by technology represents an opportunity to gain or lose stakeholders' trust. Leaders are approaching trust not as a compliance or PR issue but as a business-critical goal.
A common refrain in Deloitte’s Tech Trends reports is that every company is now a technology company. With the advent of digital technology, businesses have been asking customers to trust them in new and deeper ways, from requesting personal information to tracking online behavior through digital breadcrumbs. At the same time, headlines regularly chronicle technology-based issues such as security hacks, inappropriate or illegal surveillance, misuse of personal data, spread of misinformation, algorithmic bias, and lack of transparency. The distrust these incidents breed in stakeholders—whether customers, employees, partners, investors, or regulators—can significantly damage an organization’s reputation.1 Indeed, consumer trust in commercial enterprises is declining, citizens are becoming wary of public institutions, and workers are asking employers to explicitly state their core values.2
In what we recognize as an emerging trend, some companies are approaching trust not as a compliance or public relations issue but as a business-critical goal to be pursued—one that can differentiate them in an increasingly complex and crowded market. As discussed in Deloitte’s 2020 Global Marketing Trends report, brand trust is more important than ever for businesses—and it’s all-encompassing. Customers, regulators, and the media expect brands to be open, honest, and consistent across all aspects of their business, from products and promotions to workforce culture and partner relationships.3
Every aspect of a company that is disrupted by technology represents an opportunity to gain or lose trust with customers, employees, partners, investors, or regulators. Leaders who embed organizational values and the principles of ethical technology across their organizations are demonstrating a commitment to “doing good” that can build a long-term foundation of trust with stakeholders. In this light, trust becomes a 360-degree undertaking to help ensure that an organization’s technology, processes, and people are working in concert to maintain that foundation.
As the adage reminds us, trust is hard to gain and easy to lose.
The term ethical technology refers to an overarching set of values that is not limited to or focused on any one technology, instead addressing the organization’s approach to its use of technologies as a whole and the ways in which they are deployed to drive business strategy and operations.4 Companies should consider proactively evaluating how they can use technology in ways that are aligned with their fundamental purpose and core values.
Ethical technology policies do not replace general compliance or business ethics, but they should all connect in some way. Just as your approach to cybersecurity hasn’t taken the place of your company’s more general privacy policies, your ethical technology approach should complement your overall approach to ethics and serve as its logical extension in the digital realm. Some companies are expanding the mission of existing ethics, learning, and inclusion programs to include ethical technology; others are maintaining separate technology ethics programs. Either approach can help keep technology ethics top of mind across the organization and encourage executives to consider the distinctions between technology-related ethical issues and broader corporate and professional ethics concerns.
The fifth annual study of digital business by MIT Sloan Management Review and Deloitte found that just 35 percent of respondents believe their organization’s leaders spend enough time thinking about and communicating the impact of digital initiatives on society. While respondents from digitally maturing companies are the most likely to say their leaders are doing enough, even then, the percentage barely breaks into a majority, at 57 percent.5
These findings suggest that organizations still have significant room to take the lead. Those companies that develop an ethical technology mindset—demonstrating a commitment to ethical decision-making and promoting a culture that supports it—have an opportunity to earn the trust of their stakeholders.
In the digital era, trust is a complex issue fraught with myriad existential threats to the enterprise. And while disruptive technologies are often viewed as vehicles for exponential growth, tech alone can’t build long-term trust. For this reason, leading organizations are taking a 360-degree approach to maintain the high level of trust their stakeholders expect.
Artificial intelligence (AI), machine learning, blockchain, digital reality, and other emerging technologies are integrating into our everyday lives more quickly and deeply than ever. How can businesses create trust with the technologies their customers, partners, and employees are using?
A strong foundation for ethical technology and trust will be shaped by the principles of an organization’s leaders and realized in business processes.
Because technology is used by most, if not all, individuals within an organization, ethical technology and trust is a topic that touches everyone.
Companies that don’t consider technology to be their core business may assume that these considerations are largely irrelevant. In truth, no matter the industry or geography, most organizations are increasingly reliant on advanced digital and physical technologies to run their day-to-day operations.
Much emphasis is placed on the challenges disruptive technologies bring and on the existential threats to an organization’s reputation when technology is mishandled—whether through misfeasance or malfeasance. Yet these same disruptive technologies can be used to increase transparency, harden security, boost data privacy, and ultimately bolster an organization’s position of trust.
For example, organizations can pivot personalization algorithms to provide relevant recommendations based on circumstance—offering an umbrella on a rainy day, say, rather than an umbrella after someone buys a raincoat. By focusing on relevance rather than personalization, AI recommendations are likely to seem more helpful than invasive.23
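To make the distinction concrete, here is a minimal Python sketch of circumstance-driven recommendation logic. The context fields, the weather signal, and the recommend function are illustrative assumptions, not a description of any particular company’s system.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Illustrative situational signals available at recommendation time."""
    is_raining: bool
    recent_purchases: list[str]

def recommend(ctx: Context) -> str | None:
    """Prefer circumstance-driven relevance over purchase-history echoes."""
    # Relevance: suggest an umbrella while it is actually raining.
    if ctx.is_raining and "umbrella" not in ctx.recent_purchases:
        return "umbrella"
    # Deliberately avoid the invasive-feeling pattern of pushing an
    # umbrella merely because the customer just bought a raincoat.
    return None

# A shopper browsing on a rainy day sees a timely, relevant suggestion.
print(recommend(Context(is_raining=True, recent_purchases=["raincoat"])))
```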
Deloitte surveys have found a positive correlation between organizations that strongly consider the ethics of Industry 4.0 technologies and company growth rates. For instance, in organizations that are witnessing low growth (up to 5 percent), only 27 percent of the respondents indicated that they are strongly considering the ethical ramifications of these technologies. By contrast, 55 percent of the respondents from companies growing at a rate of 10 percent or more are highly concerned about ethical considerations.24
After all, the pursuit of trust is not just a 360-degree challenge. It is also a 360-degree opportunity.
Disruptions in the health care industry—including new care delivery models, consumer demand for digital experiences, declining reimbursements, and growing regulatory pressures—are driving many health care organizations to use technology to improve efficiency, cut costs, and improve patient care. And there could be a welcome side benefit: Technology could help health care systems build trust with patients and providers.
Providence St. Joseph Health (PSJH) is leveraging technology to adhere to its mission of improving the health of underprivileged and underserved populations, says B.J. Moore, CIO of PSJH.25 Technology is helping the Catholic not-for-profit health system simplify complex experiences to enhance caregiver and patient interactions, modernize the operating environment and business processes, and innovate with cloud, data analytics, AI, and other technologies to help improve patient care.
In the process, PSJH is building trust. For example, the organization is collaborating with technology partners to standardize cloud platforms and productivity and collaboration tools across its 51 hospitals and 1,085 clinics, a move that will improve provider and patient engagement and enable data-driven clinical and operational decision-making. It also aims to develop the first blockchain-powered integrated provider-payer claims processing system. Such technological breakthroughs can increase trust—but careless deployment and negligence can quickly erode it. That’s why Moore has doubled down on establishing and maintaining a solid technology foundation for innovation and, by extension, trust. “Technology holds so much promise for helping patients at scale,” he says. “But it also has the potential to cause damage at scale.”
For example, data analytics, AI, and machine learning can help researchers and clinicians predict chronic disease risk and arrange early interventions, monitor patient symptoms and receive alerts if interventions are needed, estimate patient costs more accurately, reduce unnecessary care, and allocate personnel and resources more efficiently. When patients understand these benefits, they’re generally willing to share their personal and health information with care providers. But their trust could diminish—or vanish—if weak data security or governance protocols were to result in a data breach or unauthorized use of private health information. This could cause patients to conceal information from care professionals, lose confidence in diagnoses, or ignore treatment recommendations.
A number of industry regulations help ensure patient privacy and safety, and PSJH has another effective governance and oversight mechanism: a council of sponsors, consisting of clergy and laypeople, that holds moral accountability for PSJH’s actions in service of its mission. Sponsors help develop guidelines that ensure adherence to mission and values and advise the organization’s executive leadership and board of trustees on trust-related technology matters, such as the ethical use of data and the impact of technology on employees and caregivers.
“We’re continuously working to raise awareness of technology’s role in improving health,” Moore says. “Educating and communicating with patients, care professionals, regulatory bodies, and other key stakeholders can help prevent potential barriers to rapid experimentation and innovation and allow us—and our patients—to fully experience the benefits of technology.”
CIBC is using technology to understand and anticipate individual client needs with the goal of delivering highly personalized experiences—an initiative it calls Clientnomics™. Terry Hickey,26 CIBC’s chief analytics officer, recognized that AI-based algorithms could deliver the client insights required to drive Clientnomics, but that to be successful, leaders needed to understand, and share with employees, how AI would complement and support their work rather than replace their jobs. The bank also needed to maintain clients’ trust by protecting their data and governing its use.
In early 2019, leaders from the bank’s analytics, risk, and corporate strategy teams collaborated to develop an organizationwide AI strategy, which CIBC’s senior executive committee and board of directors approved. At the heart of the strategy are guiding principles that address questions such as: When will we use the technology? When will we not use it? How do we ensure that we have our clients’ permission?
To reinforce employee trust, the strategic plan stated that a primary purpose of AI would be to augment employees’ capabilities to achieve company goals. Leaders agreed to focus on funding AI use cases that support employees in their roles and improve practices that aren’t currently optimized.
With the strategy in place, the next step was to build an AI governance process to ensure that new technology projects comply with the strategy and guiding principles. When a new project is proposed, stakeholders answer a series of questions that help them plan and document what they want to accomplish. These questions cover a broad range of ethical considerations, including project goals, possible inherent biases, and client permissions. Approved project documents are stored in a centralized library that regulators, internal auditors, and other reviewers can reference to explain the thought process behind the algorithm or model.
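The article does not reproduce CIBC’s questionnaire, but the workflow it describes, answering a fixed set of ethical questions and filing the approved record in a central library, maps naturally onto a simple data structure. A hypothetical sketch, with every field name invented:

```python
from dataclasses import dataclass

@dataclass
class AIProjectReview:
    """Hypothetical record of the questions answered for a proposed AI project."""
    project_name: str
    goals: str                    # What do we want to accomplish?
    known_bias_risks: list[str]   # Possible inherent biases in data or models
    client_permission_basis: str  # How client permission was obtained
    approved: bool = False

# Centralized library that regulators, internal auditors, and other
# reviewers can consult to trace the thinking behind a model.
governance_library: dict[str, AIProjectReview] = {}

def register(review: AIProjectReview) -> None:
    """File a review once it has cleared the governance process."""
    if review.approved:
        governance_library[review.project_name] = review
```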
CIBC has also developed advanced analytic techniques to help govern its use of data—for instance, encoding client data in such a way that it cannot be reverse-engineered to identify an individual. The analytics team also devised a way to assign a data veracity score—based on data quality and integrity, possible bias, ambiguity, timeliness, and relevance—to each piece of information that could be used by an algorithm. The algorithmic models are designed to recognize each data point’s veracity and weight it appropriately, supporting more reliable, trustworthy, and engaging interactions.
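Neither the encoding nor the scoring method is detailed here, so the sketch below rests on two assumptions: a keyed one-way hash (HMAC-SHA-256) for the irreversible encoding, and a simple weighted average over the dimensions named above for the veracity score. The key handling, weights, and field names are all illustrative.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-kept-in-a-managed-vault"  # assumption, not CIBC's practice

def pseudonymize(client_id: str) -> str:
    """Keyed one-way hash: without the key, the output cannot be
    reversed or matched via precomputed-table attacks."""
    return hmac.new(SECRET_KEY, client_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical weights for the dimensions named above, each scored 0.0-1.0.
WEIGHTS = {"quality": 0.30, "bias": 0.20, "ambiguity": 0.15,
           "timeliness": 0.20, "relevance": 0.15}

def veracity_score(scores: dict[str, float]) -> float:
    """Weighted average across the named veracity dimensions."""
    return sum(WEIGHTS[dim] * scores.get(dim, 0.0) for dim in WEIGHTS)

print(pseudonymize("client-12345"))
print(veracity_score({"quality": 0.9, "bias": 0.8, "timeliness": 1.0}))
```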
As the analytics team launches Clientnomics, members are focused on developing customized AI-supported client experiences rather than large-scale technology projects. So far, they have accumulated 147 use cases, completing 40 in the first year.
For example, when a client calls CIBC’s contact center, a predictive model dynamically configures the interactive voice response menu based on the client’s recent transactions and offers the most relevant information at the top of the menu. The bank aims to cement client relationships over time with a continuous string of personalized interactions.
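As a toy illustration of that pattern: rank each menu option by how strongly it matches the caller’s recent activity and surface the best matches first. CIBC’s production system presumably uses a trained predictive model rather than the simple matching below, and every menu item and transaction type here is invented.

```python
def build_ivr_menu(recent_transactions: list[str],
                   options: dict[str, list[str]]) -> list[str]:
    """Order IVR options so those matching recent activity surface first.

    `options` maps each menu item to the (hypothetical) transaction
    types that make it relevant to the caller.
    """
    def relevance(item: str) -> int:
        return sum(t in recent_transactions for t in options[item])
    return sorted(options, key=relevance, reverse=True)

menu = build_ivr_menu(
    recent_transactions=["card_declined", "foreign_atm_withdrawal"],
    options={
        "report a lost or stolen card": ["card_declined"],
        "travel notifications": ["foreign_atm_withdrawal"],
        "branch hours and locations": [],
    },
)
print(menu)  # most relevant items appear at the top of the menu
```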
“In my previous role,” Hickey says, “I spent a lot of time with organizations around the world. Everyone talked about the benefits and future potential of AI, and some completed proofs-of-concept, but few were able to implement them, especially in banking and finance. By proactively addressing how we will—and will not—use technology, CIBC has embraced the positive benefits it can deliver to employees and clients. All of this in less than a year.”
In the health care industry, trust is a primary driver of patient behavior: Trusted organizations have an edge in influencing behaviors that can create more positive health outcomes. For 130-year-old global health care company Abbott, trust is top of mind as it evolves and expands its portfolio of diagnostic products, medical devices, nutritionals, and branded generic medicines, says CMO Melissa Brotz.27
With technology-driven products such as sensor-based glucose monitoring systems, smartphone-connected insertable cardiac monitors, and cloud-connected implantable defibrillators and pacemakers, Abbott takes a multifaceted approach to trust, adds CIO Mark Murphy.28 Across the enterprise and its connected technologies, this includes comprehensive data protection policies, employee training programs, an external ecosystem of trust-based partners, and other components.
For example, Abbott is exploring multiple data-enabled opportunities to improve health care, such as a machine learning solution that combines performance data from the company’s diagnostics platforms with global clinical and patient demographic data to help health care providers diagnose heart attacks.29 To safeguard patient data and privacy—a core facet of trust—Abbott has enacted a number of enterprisewide policies, procedures, and annual employee training and certification programs related to data handling and protection and compliance with national and global regulatory mandates. Leaders have also made significant investments in cybersecurity capabilities and controls embedded into product designs, which is increasingly critical for a company such as Abbott, with products and services that are heavily connected and integrated—often with other products, systems, and apps.
In addition, ensuring patient trust is a responsibility that falls to each of Abbott’s 103,000 employees, from the board of directors and C-suite leadership to researchers, product designers, and engineers. Company leadership, for instance, is involved in data and product security oversight groups and board subcommittees, while employees participate in rigorous education programs on the implications of data privacy, security, and transparency. “Abbott is focused on helping people live better, healthier lives,” Murphy notes. “Often, technology is the enabler to help us do that, but it always starts with the patient. We know that when we build technology, we are doing so on behalf of the person who wears it, accesses it, or lives with it inside their body. And that means we must protect it—securely and responsibly.”
Abbott also relies on a strong external ecosystem to maintain patient trust. Independent third parties and research groups test Abbott’s products and services and assess their vulnerabilities on an ongoing basis. For example, the company is part of the #WeHeartHackers initiative, a collaboration between the medical device and security research communities that seeks to improve medical device security. At a recent event, Abbott teamed with university researchers to build a mock immersive hospital that enabled researchers to practice cybersecurity defense techniques.30
Rounding out Abbott’s trust ecosystem are patients and care providers themselves. To learn what concepts such as trust, security, and privacy mean to the different users of its products and services, the company regularly holds focus groups with them and produces educational material to raise awareness of these issues.
Ultimately, Brotz says, data-enabled technologies that help people live better lives are an extension of the lifesaving products and services that patients and their care providers have trusted for 130 years. “Patients place the highest levels of trust in us, and we take it very seriously,” she says. “It’s part of our DNA. Our greatest responsibility is to keep them and their data safe and secure.”
Because a company’s approach to technology directly affects stakeholder trust in its brand, businesses that are leveraging advanced technologies can benefit from considering the technologies’ impact on ecosystem partners, employees, customers, and other key stakeholders. Strong security controls and practices are foundational elements for building and maintaining stakeholder trust. Recognizing the impact of security breaches on customer trust, Google went beyond the expected table stakes by completely redesigning its security model to protect enterprise systems and data.
A decade ago, as Google moved internal applications and resources to the cloud, its security perimeter was constantly expanding and changing, complicating the defense of its network. At the same time, companies were seeing more sophisticated attacks by nation-state-sponsored hackers, testing the limits of the perimeter-based security model. Hence, Google decided to completely overhaul its security approach and implement a new model that turned the existing industry standard on its head, says Sampath Srinivas, Google product management director for information security.31
Google security experts could no longer assume that walling off the network would provide the security required to maintain system integrity and customer trust. They sought to reinvent the company’s existing security architecture, since the traditional castle-and-moat model—based on a secure network perimeter with VPN-based employee access to internal applications—was no longer adequate. The goal: to ensure that employees could use any corporate application from any location on any device as easily as if they were using Gmail and as safely as if they were in a Google office.
Google embraced the zero-trust concept, an innovative security model that eliminates network-based trust, Srinivas says, instead applying access controls to applications based on user identity and the status of their devices, regardless of their network location.
Google’s zero-trust security strategy treats every single network request as if it came from the internet. It applies context-aware access policies based on signals such as user identity, device attributes, session information, IP address, and the context of the access request itself, collected in real time by a device inventory service. A globally distributed reverse proxy server protects the target server, encrypts traffic to protect data in transmission, and acts as a sophisticated rules engine that determines access rights based on the user’s and device’s context, such as whether the device is fully patched. Every access request is subject to authentication, authorization, and encryption. To protect against phishing, the company—working with the industry in the FIDO Alliance standards organization—developed and deployed a new form of cryptographic hardware two-factor authentication called Security Keys.32
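Below is a drastically simplified Python sketch of the kind of per-request decision such a rules engine makes. It illustrates the zero-trust pattern, not Google’s actual policy engine, and every attribute, application, and group name is invented.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    user_groups: set[str]
    device_fully_patched: bool  # reported by a device inventory service
    device_managed: bool
    authn_method: str           # e.g., "password" or "security_key"

# Hypothetical per-application authorization policy.
APP_POLICY = {"payroll": "hr", "source_code": "eng"}

def authorize(req: AccessRequest, app: str) -> bool:
    """Decide each request on user and device context alone,
    ignoring network location entirely."""
    # Require phishing-resistant two-factor authentication.
    if req.authn_method != "security_key":
        return False
    # Device posture gates access no matter where the request originates.
    if not (req.device_managed and req.device_fully_patched):
        return False
    # Authorization is tied to user identity, per application.
    required_group = APP_POLICY.get(app)
    return required_group is None or required_group in req.user_groups

req = AccessRequest("alice", {"eng"}, True, True, "security_key")
print(authorize(req, "source_code"))  # True: identity and device checks pass
```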
Today, Google’s user- and device-centric security workflow allows authorized users to securely work from an untrusted network without the use of a VPN. The user experiences internal applications as if they were directly on the internet. Employees, contractors, and other users can have a seamless user-access experience from any location by simply typing in a web address—a process that dramatically reduces support burden. “To deliver on our goal of maintaining customer privacy and trust, we had to look beyond the status quo solutions, innovate, and take risks,” Srinivas says. “When we broke with tradition and changed the way we thought about our security infrastructure, it allowed us to develop a more effective way to protect data and systems.”
When I speak with leaders in the corporate world, they often ask for advice on how to build a brand that customers and employees trust. As we talk, I find that some haven’t carefully thought about what they mean by “trust.” Some define it subjectively, like a warm fuzzy feeling. At the other end of the spectrum, others assume that if a customer is willing to use a service or product, that action alone implies trust. I believe that neither of these definitions is complete or accurate.
To me, trust is a willingness to make yourself vulnerable because you expect the broader system to act in ways that support your values and interests. That doesn’t mean that you expect the company will never make a mistake or experience an unintended outcome. Instead, what’s important is that if something goes wrong, you’re confident that the company will take care of it.
This definition applies even if a company’s product isn’t 100 percent reliable. For example, I’m more likely to buy from a company I trust, even if its product is occasionally unreliable, because I’m confident that if something goes wrong, the company will take care of me and my interests. I’m less likely to buy from a company offering a highly reliable product if I’m concerned that if the unexpected happens, I’ll be left to deal with the consequences on my own.
So how should corporate leaders approach trust? The first step is to think through the relevant values and interests of both the company and its stakeholders. What are the things that matter to customers, users, employees, and shareholders? This question supports a discussion about how the product or service could advance, protect, or impair the values and interests of those stakeholder groups.
The second step is related to design. How can the organization design a product or service that supports or endorses those relevant values? This is where ethics comes in. From my perspective, ethics is about asking two questions: What values should we have? Then, given these values, what shall we do to advance them? Of course, sometimes values conflict, which pushes organizations to think about the problem differently. Can we design the product in such a way that we don’t have to choose? This design approach can generate innovative and trusted products.
It’s impossible to totally avoid unexpected consequences, but leaders who bring together multidisciplinary product teams can tilt the odds in their favor. A team made up of people from a variety of backgrounds and cultures—who feel free to openly share their experiences and opinions—can often uncover creative design solutions or potential design issues. But when a conflict in values is unavoidable, leaders must make intelligent, self-aware, deliberate choices. A leader should decide what’s most important to the company—and own it.
Most leaders already know the right action to take if the ultimate goal is to build trust. But some care more about cost reduction. Or increased efficiency. Or speed to market. The list goes on. And that’s fine. Leaders can choose to build things that don’t increase user trust, if they understand why they are making that choice and are willing to accept the consequences—expected or unexpected. Problems occur when leaders make choices that damage trust without realizing what they are doing.
Another misconception that leaders often have is that being ethical conflicts with being profitable. This is a false dichotomy. Companies have proven that they can produce reliable, powerful, user-friendly—and profitable—products. And while the products may not perform perfectly all the time, trusted companies have ways to monitor and detect problems, as well as methods for addressing issues quickly and effectively.
My dream is that within 20 years, corporate leaders won’t need to ask ethicists or other advisers about human or societal impacts that could result from product design decisions. I hope the answers will be internalized into corporate cultures so that asking questions such as “Are we sure this is a good idea?” is just part of what organizations consistently do.
A company’s brand is, by definition, a contract of trust. Yet in business, brand trust can erode overnight. CEOs and C-suite leaders across the organization can communicate the importance of trust to their company’s mission and set clear ethics guardrails. Indeed, establishing clear policies for the ethical use of technology—an important first step in earning trust—could benefit their businesses. Ultimately, individual employees act based on their best understanding and awareness of an organization’s policies and values. This is no small matter: They will be making deliberate decisions about trust that will manifest in their company’s strategy, purpose, and market performance. Moreover, if leaders don’t own the trust and ethics agenda, decisions will be made in a diffuse way. CEOs have an opportunity to provide clarity, education, and ongoing communication. With the entire enterprise aligned behind the C-suite’s guidelines on ethical technology and trust, CIOs can help ensure that tech strategies, development efforts, and cyber approaches support those guidelines.
One of the finance function’s primary responsibilities is to build and maintain trust among customers, business partners, and investors. Yet rising expectations of transparency are making it more difficult for finance to meet this responsibility. Consider this scenario: Using drone-based cameras, analysts identify a potential issue in your company’s manufacturing or distribution facilities that your operations team missed, then unexpectedly raise it on an earnings call. Markets now expect companies to respond to situations such as this in near real time. Failure to do so raises doubts, which in turn can erode market trust. To meet this challenge, finance organizations will likely need to collect more data from across the enterprise and deploy advanced analytics that enable real-time reporting. They can also collaborate with peers to educate employees on the value that ethics and trust help create. With these capabilities in place, CFOs will be able to help their companies deliver the kind of detailed, accurate, and timely responses that markets—and the analysts and investors who watch them—demand.
Cyber threat vectors have evolved rapidly, and attacks have become increasingly sophisticated, deliberate, and unrelenting. Fifty-seven percent of companies participating in Deloitte’s 2019 Future of Cyber Survey experienced their most recent cyber incidents within the past two years.33 And the risk isn’t just that cyber incidents will destroy trust in the classical sense. The opportunity cost of what cyber vulnerabilities can prevent organizations from doing can be far greater: The specter of cybercrime and its fallout can cast a shadow over an organization’s efforts to turn technology to better use, strangling innovation and slowing digital transformation efforts to a crawl. It can also affect the bottom line, quickly and dramatically. One survey found that 48 percent of respondents had stopped using online services that reported data breaches.34 The issues of ethical technology and trust will steadily capture CXO mindshare. CIOs have a responsibility to help other enterprise leaders become more tech-savvy and understand the impact their digital strategies can have on the organization’s trust brand.