12 ways customers can make bad tech buying decisions

So little time, so many ways technology decisions can go wrong

When you look at new technologies, are you like a kid in a candy store, excited to try every latest innovation? Or maybe a leader in your organisation is a technology gambler, ready to select vendors without sufficient analysis or due diligence?

Or perhaps the procurement manager, the project management office, or business stakeholders put tech selections through such exhaustive research that your organisation is left in innovation’s wake and stuck in the mud with legacy platforms?

These technology buying personas are found in many organisations, and they can undermine the ability of tech leaders to make wise and timely technology selections. Haphazard tech selection leads to wasted effort and technical debt, while overly methodical approaches slow the pace of innovation and thwart experimentation, smart risk-taking, and agile cultures.

These personas can derail your technology decision process in all sorts of ways, from bogging down your organisation’s technology evaluation process to impairing the decision-making around when to invest in technologies and which products or services to consider.

Here are 12 anti-patterns to watch out for. If you want to make wise technology decisions, don’t do the following:

1 - Accept executive input as a final decision

When the CEO or another influential executive asks the technology team to buy and implement a specific tech solution, it’s critical to step back and understand the rationale.

What problem is this leader trying to solve, and how well does the solution meet expectations? All too often, I hear tech leaders accept the executive’s voice as an edict and not take steps to rationalise the approach or present alternatives.

One solution is to create the discipline of drafting and presenting one-page vision statements that focus on a problem, opportunity, or value proposition. Well-crafted vision statements define goals but are not prescriptive regarding solutions or implementations. Even if the tech team fills this out on behalf of the executive, it often leads to a discussion and debate on multiple solutions.

2 - Fail to solicit or consider customer input

As technologists, we sometimes make the same mistakes that executives make when jumping into implementations. We see the problem, we know a solution, and a sense of urgency drives us to implement the fix.

Unfortunately, by not including the customer’s voice in the decision-making process, or by not understanding whether the solution actually benefits the customer, we can easily deliver capabilities that miss the mark. Organisations often fail even to formally define who the customer is for a given technology project.

Defining a customer is easier when you are developing end-user applications, where roles and personas map directly to users. Finding a customer role can be more challenging for back-end capabilities, including infrastructure, security capabilities, middleware, libraries, or web services.

But technologists are part of the business too. Architects, business analysts, or technology leads can serve as proxies for the customer role when implementing back-end technologies. Ask them to provide requirements, identify acceptance criteria, make decisions on trade-offs, and rate their satisfaction with the implemented solution.

3 - Ignore existing standards and technologies

Historically, tech departments have struggled with creating and maintaining documentation and with communicating and managing standards. So, when an urgent request or top requirement surfaces, we’re more likely to seek new solutions rather than investigate and reuse existing capabilities.

This approach often leads to redundant capabilities, half-developed solutions, and mushrooming technical debt. Adding a “research internal solutions” step before or as part of investigating new solutions is a simple discipline that can increase reuse. When people recommend new technologies, create a process for estimating upgrades to legacy platforms or consolidating technologies with similar capabilities.

4 - Foster a one-vendor, one-approach tech culture

Ever hear someone state emphatically, “We’re an x shop,” as a way of curtailing any research, review, and consideration of other vendors or technologies? It’s one thing to have standards and preferred vendors. It’s another to be ignorant of third-party capabilities and to stymie discussion of alternatives.

Allowing the voices of a few strong platform advocates to drown out exploration and experimentation can lead to costly mistakes. Technology leaders should openly address this cultural anti-pattern, especially if it’s discouraging people from asking questions or challenging status quo thinking.

5 - Presume build or buy is the only choice

There is a wide grey zone between building solutions with custom code and buying SaaS or other technologies that provide out-of-the-box capabilities. In between are highly configurable low-code and no-code platforms, commercial partnerships, and opportunities to leverage open source technologies.

So build versus buy is an oversimplification. A better set of questions is whether the required capabilities help differentiate the business and what types of solutions deliver more innovation and flexibility over the long run.

6 - Assume APIs meet integration needs

Most modern SaaS platforms and even many enterprise systems offer APIs and other integration options. But cataloguing integration hooks should be only the start of investigating whether they meet business needs.

What data does the API expose? Are the desired views and transactions supported? Can you easily connect data visualisation and machine learning tools? Does the API perform sufficiently, and are there underlying usage costs that need consideration?

Approaches to accelerating reviews of integration capabilities include validating APIs directly and leveraging low-code integration platforms.
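To make that concrete, here is a minimal Python smoke test, assuming the requests library is available; the base URL, endpoint, and required fields are hypothetical stand-ins for whatever the vendor under evaluation actually exposes:

    import time
    import requests

    # Hypothetical base URL and fields -- substitute the real vendor API
    BASE_URL = "https://api.example-vendor.com/v1"
    REQUIRED_FIELDS = {"id", "status", "updated_at"}

    def smoke_test(path, params=None):
        # Fetch one resource and check availability, shape, and latency
        start = time.perf_counter()
        response = requests.get(BASE_URL + path, params=params, timeout=10)
        elapsed = time.perf_counter() - start
        response.raise_for_status()  # fail fast if the call itself fails
        payload = response.json()
        record = payload[0] if isinstance(payload, list) else payload
        missing = REQUIRED_FIELDS - record.keys()
        print(f"{path}: {elapsed:.2f}s, missing fields: {missing or 'none'}")

    smoke_test("/orders", params={"limit": 1})

A handful of checks like this, run against realistic data volumes, quickly reveals whether the catalogued integration hooks actually support the views, transactions, and performance the business needs.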

7 - Fail to perform social due diligence

When we’re confronted with a long list of possible solutions, trusted information sources can help us narrow the playing field. Reading blogs, white papers, reviews, and research reports, and watching webinars, keynotes, and online tutorials are all key learning steps.

But one resource often overlooked is social networks, where you can consult with experts. Two places to start are IDGTechTalk and #CIOChat, where many experts will provide advice and share alternative solutions.

8 - Skip the proof of concept

The art, craft, and science of selecting technologies involves designing and executing proof-of-concept solutions (PoCs) that validate assumptions and test for key strategic requirements.

PoCs are particularly important when validating emerging technologies or evaluating SaaS platforms, but even using agile spikes to review third-party technology components helps accelerate decision-making and avoid expensive mistakes.

The biggest mistake may be skipping the PoC, either because you believe what you’ve read, you trust the vendor, or you face too much time pressure. Even when a PoC green-lights a technology, what you learn from the PoC can help you steer priorities to feasible implementations.

9 - Develop elaborate decision matrices

When many people are involved in reviewing and evaluating new tools and technologies, one common approach to help drive a data-driven decision is to create a decision matrix spreadsheet. Features and capabilities are weighted by importance, then rated by a review committee. The spreadsheet calculates the aggregate scores.
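The mechanics are simple enough to sketch in a few lines of Python; the criteria, weights, and ratings below are invented purely for illustration:

    # Invented criteria, weights, and committee ratings -- illustrative only
    weights = {"ease_of_use": 5, "api_coverage": 4, "vendor_support": 2}
    ratings = {
        "Vendor A": {"ease_of_use": 4, "api_coverage": 3, "vendor_support": 5},
        "Vendor B": {"ease_of_use": 3, "api_coverage": 5, "vendor_support": 3},
    }

    for vendor, scores in ratings.items():
        # Weighted sum of the committee's 1-5 ratings per criterion
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{vendor}: {total}")

    # Prints Vendor A: 42 and Vendor B: 41. Raise the api_coverage weight
    # from 4 to 5 and the winner flips -- small weighting choices decide.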

Unfortunately, these tools can get out of hand quickly when too many people are involved, too many features are listed, or arbitrary weightings are assigned. The spreadsheet ends up prioritising its author’s preferences, and reviewers, busy rating every bell and whistle, lose sight of what needs to be evaluated strategically.

Before embarking on a decision matrix, take a step back. Consider distilling the characteristics of the solutions down to the essence of the business problem, rather than requiring long lists of features to be evaluated by too many reviewers.

10 - Ignore long-term architecture, lifecycle, and support considerations

I’m a big proponent of evaluating technologies based on ease of use and time to value, but that doesn’t mean longer-term architecture, maintenance, and support considerations aren’t important or don’t require evaluation.

The key is to decide when to evaluate them, what the key considerations are, who will be involved in the review, and how much time to invest in the assessment. A good way to do this is to separate the gating concerns that tech teams should consider at the start of an evaluation from the longer-term factors that should be inputs to the decision-making process.

11 - Omit SLA, data protection, and security reviews

Time pressure and (blind) faith in your chosen technology are poor excuses for skimping on reviews of service level agreements (SLAs) and evaluations of vendor security and data protection practices. The key to doing these reviews well is having the necessary expertise, negotiation skills, and tools -- and an efficient evaluation process, so that technologists and business sponsors don’t perceive the reviews as bottlenecks.

Larger organisations that perform SLA, data protection, and security reviews in-house must be time-efficient and focus their efforts on aligning the evaluation with the top risks. Smaller companies with insufficient expertise should seek outsiders with expertise in the solution domain.

12 - Delay financial and legal reviews

Last on my list, but certainly not least, are financial and legal reviews. The anti-pattern here is waiting too long to bring in the experts needed to conduct them.

Consider that many SaaS offerings, API services, and cloud-native technologies have consumption-based pricing models, and the operating costs may not meet budget or financial constraints.
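A rough projection is easy to build early, as in this sketch; the per-unit rates and volumes are invented, and the real numbers belong in the vendor’s quote and your finance team’s model:

    # Invented rates and volumes -- replace with the vendor's actual quote
    PRICE_PER_1K_API_CALLS = 0.40  # USD
    PRICE_PER_GB_STORED = 0.02     # USD per month

    def monthly_cost(api_calls, gb_stored):
        return api_calls / 1000 * PRICE_PER_1K_API_CALLS + gb_stored * PRICE_PER_GB_STORED

    for growth in (1, 5, 10):  # today's volume, then 5x and 10x growth
        calls, storage = 2_000_000 * growth, 500 * growth
        print(f"{growth}x volume: ${monthly_cost(calls, storage):,.2f} per month")

Even a crude model like this surfaces whether projected operating costs will fit the budget before contracts are signed.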

Legal reviews are particularly important for companies in regulated industries or companies that operate globally, and reviewing compliance factors in both cases can be especially time-consuming. For both financial and legal reviews, delays can be costly.

Don’t wait until the end of the technology review process to bring in financial and legal expertise. My advice is to bring them in at the start and ask them to weigh in on what will need reviewing early on -- before any technology selection decisions are made. Further, don’t overtax your financial and legal resources by having too many evaluations in progress at once.

Trying to juggle multiple technology evaluations is unrealistic for many companies, and leaders should prioritise their shopping efforts. If they do, I promise you that smart, comprehensive, and efficient technology reviews are possible.