Technology & Innovation
GeoTech Cues, January 28, 2022

Cybersecurity in service delivery

By Andrés de Jongh

As in any era of exponential growth, the speed at which benefits are created for society is closely followed by potential threats that must be guarded against. Thus, in the Fourth Industrial Revolution, guarding against those threats is not an exercise in eternal pessimism about progress. Rather, it is a call to action that allows society to be equally innovative in mitigating risks.

Few issues expose the natural risk-and-reward relationship of technology as clearly as the increasing cyber vulnerability of key infrastructure as smart city and smart service delivery solutions become more widespread in the developed and developing world. The benefits for citizens are straightforward and powerful: more efficient services improve quality of life and free up resources for personal and collective growth. The potential threats have taken more time to come to light, and the consequences of not taking them seriously are only now being understood.

A clear and timely example is the Colonial Pipeline ransomware attack. Without diving deep into the details of how it happened, it is clear that even the most advanced infrastructure, which delivers one of the most important resources for any economy, is only as reliable as its weakest component. The fact that the attack was initiated in a relatively simple way – through a compromised password of a remote access account – makes the risk all the more evident. 

Up to two decades ago, the primary concern of service delivery companies and governments was securing physical infrastructure with physical countermeasures. Now, with the widespread use of advanced digital tools, which make services more efficient and safer, the digital space is where vulnerabilities abound. Colonial Pipeline is one of the largest and most recent publicly reported cases, but a relatively small early warning was the Rye Brook Dam (New York) cyberattack in 2013. Another example came in 2015, when power outages in Ukraine left hundreds of thousands without service for hours; the cause was later determined to be a cyberattack initiated through relatively simple email phishing.

The relative simplicity of these attacks not only emphasizes the need to increase readiness at all levels of service delivery, but also underlines how potential threat points are multiplying at breakneck speed. The number of IoT device connections worldwide was estimated at approximately 10 billion in 2019, reached approximately 14 billion in 2021, and is projected to grow to around 30 billion by 2025.
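To put those figures in perspective, a quick back-of-the-envelope calculation (using only the approximate numbers cited above) shows the implied annual growth rate of the threat surface:

```python
# Implied compound annual growth rate (CAGR) of IoT connections,
# computed from the approximate figures cited in the text.
connections_2019 = 10e9   # ~10 billion connections in 2019
connections_2025 = 30e9   # ~30 billion projected for 2025
years = 2025 - 2019

cagr = (connections_2025 / connections_2019) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 20% per year
```

In other words, the projections imply the number of potential entry points growing by roughly a fifth every year.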

Enabling internet connectivity in a wide array of items is one of the cornerstones of community-centered service delivery. As mentioned previously, it’s precisely the access to highly specialized and specific data from each community that will allow for unprecedented deployment of customized solutions, at costs that are accessible to almost all cities. The key challenge is that if these systems are not designed with cybersecurity at their core, each one of those IoT connections is a potential risk to the stability of essential services for millions of people. 

Last, but certainly not least, these fundamental cyber risks not only pose a threat to the efficient delivery of key services, but also to personal information of individual citizens. Service delivery is rapidly becoming more digital on the infrastructure front, but the range of electronic government solutions that have been deployed is even broader. Everything from tax payments to medical information is exchanged with regional and local authorities on a daily basis, and as smart city solutions become more interconnected, so will databases containing citizen data. 


External threats to personal information sit on the more aggressive side of the risk spectrum, but for many citizens, the implementation of highly digital solutions in public service delivery immediately raises privacy concerns about how legitimate authorities handle their information. The growing presence of IoT-connected devices and machine learning algorithms at the core of key services opens the door to collecting personal information in very small increments, and has already reached a point where many citizens are not comfortable. Progressively, it's less about filling out and sending a digital form with several key pieces of personal information at once, and more about thousands of sensors – and the algorithms behind them – perceiving and analyzing every single interaction, in many cases without the person knowing about it in real time.

The risks and challenges of privacy in cutting-edge service delivery do not reside only with governments and authorities. A smart city ecosystem is composed of a very broad spectrum of public, private, and public/private providers, all of which commonly share some or all of their data. The checks-and-balances dynamic that must exist for privacy to be preserved has to evolve on two fronts: first, through proactive and modern regulations developed at higher levels of government, in order to guarantee a stable playing field across specific cities and regions; second, through robust citizen engagement in the development of those regulations and, more importantly, in reporting privacy concerns.

But what exactly are the concerns that citizens are raising? The main source of unease, because it has become a relatively common occurrence, is data leaks: passwords, account numbers, tax documents, medical information, and addresses, among others, brought to light when external, unauthorized actors seize information. Almost equally concerning is the case of information crossing. This happens when data is collected for a specific purpose, by a specific institution, and shared, simultaneously or at a later date, with other institutions that will use it for a completely different purpose. It's not uncommon for this to happen when the institutions fall under the umbrella of a broader organization, such as an entire city, and while the sharing might have been included in the terms and conditions, these agreements are often not proactively explained.

The concerns mentioned above are in the realm of data collection, storage, and sharing. But there is another equally important area that causes acute unease for many citizens: surveillance. The issue even has geopolitical implications, because it is known that some countries have taken a very proactive stance on collecting information about their citizens simply to know what they are doing at all times, without any relation to service delivery. However, the connection is very real and direct, because in many cases the same technologies that are used to improve key services can generate data that facilitates surveillance.

On the positive side, there are several measures authorities and companies can implement to mitigate these risks. First and foremost, service delivery technology must prioritize informing customers and/or citizens about when and why their information is being collected. This goes hand in hand with broader and more ambitious digital identification efforts, where citizens have a centralized hub to interact with authorities, and can monitor in which databases their information lives. Also, without revealing any sensitive information about the solution, organizations should inform end-users and regulators how the information is used to generate the final result. Any and all forms of transparency with citizens will generate important levels of goodwill towards a specific technology, and towards city managers in general. 

After ensuring maximum transparency in the collection of data, city managers should put in place detailed and strict policies of data aggregation and anonymization. Any details that could directly or indirectly allow employees of the city, or third parties, to identify a specific citizen should be erased or cloaked. In addition to this, data collection technologies should follow specific guidelines that limit their scope to only the information that is necessary to complete their designated process within the ecosystem. Extra data can not only make processes inefficient but also represent genuine risk.
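The two policies above – anonymization and aggregation – can be sketched in a few lines of code. This is only a minimal illustration: the salt, field names, and threshold are invented, and real deployments would use formal techniques such as k-anonymity or differential privacy.

```python
import hashlib

# Hypothetical salt; in practice it would be rotated and stored securely.
SALT = b"rotate-and-store-securely"

def pseudonymize(citizen_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + citizen_id.encode()).hexdigest()[:16]

def district_counts(records, k=5):
    """Release only district-level counts, suppressing groups smaller than k."""
    counts = {}
    for rec in records:
        counts[rec["district"]] = counts.get(rec["district"], 0) + 1
    return {d: n for d, n in counts.items() if n >= k}

# Toy dataset: 7 records in "north", 2 in "south".
records = [
    {"id": pseudonymize(f"citizen-{i}"),
     "district": "north" if i < 7 else "south"}
    for i in range(9)
]
print(district_counts(records))  # {'north': 7} -- the small 'south' group is suppressed
```

The suppression step reflects the principle that releasing very small groups can indirectly re-identify individuals even after names are removed.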

On a final note, regarding privacy protection, while there are no perfect solutions and all come with a certain amount of risk, one practice that can add a level of security is only implementing IoT that has the capacity to process the raw data locally, so the transmission to cloud servers does not put personal information at risk. 
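The local-processing idea can be made concrete with a short sketch. The class and method names below are illustrative, not a real device API; the point is simply that raw readings are folded into an aggregate on the device, and only the aggregate is ever transmitted.

```python
# Minimal sketch of on-device (edge) processing: raw samples never
# leave the sensor; only an anonymous summary is sent to the cloud.

class EdgeSensor:
    def __init__(self):
        self._count = 0
        self._total = 0.0

    def record(self, raw_reading: float) -> None:
        # Raw samples are folded into running totals and then discarded.
        self._count += 1
        self._total += raw_reading

    def summary(self) -> dict:
        # Only this aggregate is ever transmitted off-device.
        avg = self._total / self._count if self._count else 0.0
        return {"samples": self._count, "average": round(avg, 2)}

sensor = EdgeSensor()
for reading in [21.5, 22.0, 21.8]:
    sensor.record(reading)
print(sensor.summary())  # {'samples': 3, 'average': 21.77}
```

Because the individual readings are discarded immediately, a breach of the cloud database exposes only summaries, not the fine-grained traces that could identify a person's behavior.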

Ethics and Machine Learning

One of the most important areas of focus that needs attention in the future is developing frameworks and policies to ensure the highest degree of objectivity and ethics in the decisions that are made using machine learning, which potentially affect millions of citizens at a time. This challenge is not unique to government service delivery – in fact, it is probably one of the top areas of study in all fields of technological advancement nowadays, and will only increase over the next decades. Algorithms are evolving in complexity and range, leading to efficiencies that were considered impossible only a few years ago, but with the side effect of becoming less transparent. 

The advances in complex algorithms for next-generation service delivery are deeply related to another area of growth: big data in urban environments. As mentioned above, the exponential increase in IoT-enabled devices that feed information to city managers has created immense databases with information about an unprecedented quantity of citizens, public assets, and processes. The days of narrow samples from which larger conclusions can be extrapolated are rapidly coming to an end. If a city authority wants to measure the amount of solid waste being generated on a monthly basis, it no longer monitors only certain collection points and then calculates a reasonable average. It can place cost-efficient sensors at every collection point and know the totals in real time.
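The contrast between the two approaches in the waste example can be sketched directly. All figures below are invented for illustration:

```python
import random

# Hypothetical monthly waste (kg) at each of 200 collection points.
random.seed(42)
readings_kg = {f"point_{i}": random.uniform(50, 300) for i in range(200)}

# Legacy approach: monitor a sample of 20 points and extrapolate
# a city-wide total from the sample average.
sampled = list(readings_kg.values())[:20]
estimated_total = sum(sampled) / len(sampled) * len(readings_kg)

# Sensor-per-point approach: simply sum every reading in real time.
actual_total = sum(readings_kg.values())

print(f"estimated: {estimated_total:,.0f} kg, actual: {actual_total:,.0f} kg")
```

The sampled estimate carries error that depends on which points happen to be monitored; the sensor-per-point total does not, which is the practical meaning of "the end of narrow samples."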

Big data allows for urban knowledge on a massive scale. But the way the data is converted into knowledge – and eventually into decisions and policy – is through algorithms that are able to process an almost infinite amount of information in a short period of time and present the results as metrics that are simple to digest for the people who are responsible. These algorithms are changing on a daily basis, sometimes through improvements that humans include, but most of all through real-time self-improvement as they come in contact with more and more real information. The second case is the one that requires the most attention. Machine learning makes processes more efficient almost in real time, but the way it's happening is not necessarily understandable by the officials who are ultimately responsible for the outcomes. This represents an accountability challenge in two ways: first, in a more straightforward sense, it's difficult to measure the efficiency and impact that a particular city management team is having if the process they followed is not fully understood; second, without understanding why an algorithm is making certain adjustments, it's impossible to measure and control potential bias. In essence, the design and implementation of AI have advanced exponentially, and solutions for monitoring it have to catch up.

While transparency and accountability are values that are important across all industries and organizations, it's safe to assume that there is consensus on their fundamental nature in relation to governance. Few subjects garner more global attention – although it varies from region to region, and culture to culture – than the checks and balances that should exist within government, and in its relationship with the populations it serves. These checks and balances are essentially processes that guarantee that every action of a government must have a reasonable explanation, and that negative actions will be corrected. Combine this essential element of society with the rapidly evolving nature of AI, and the challenge for stakeholders becomes evident: how can we monitor and correct processes which we increasingly don't understand? Is AI making cities more efficient at the cost of making them less inclusive?

As we grapple with all these challenges and questions on how community-centric service delivery is evolving, it's important to ground the analysis with specific cases and trends that are being used in cities around the world. Some projects and policies are based on tried-and-true methodologies that are now being augmented by access to big data and machine learning. For example, in Medellín, there is an ecosystem of control centers that have been coexisting and supporting each other for more than a decade. The first is the Integrated Metropolitan Emergency and Security System, from where 10 government agencies respond to emergencies in the city. The second is the Early Warning System, which integrates information from over 100 sensors of different types to assess hydrometeorological and air-quality risks in real time. Last, they have a Mobility Control Center that focuses on intelligent transport systems, logistics, and citizen engagement. Although centralized control infrastructure has been in place for decades, such centers have only recently begun to be deployed more broadly, taking advantage of a much wider supply of technology at lower price points. Also, the way these control centers operate is changing rapidly, going from solutions that focus primarily on keeping agencies coordinated to solutions that emphasize data analysis and real-time decision making.

The state of play is not limited to traditional government agencies and investment in technological infrastructure. Private actors are getting involved in public service delivery solutions at an exponential pace, while exploring innovative ways to "interpret" cities as a whole. One such case is Citibeats, an ethical AI company that is capable of analyzing social media interactions – numbering in the hundreds of thousands – to understand the topics of concern for citizens in real time and lead to more robust decision making by authorities, with a clear focus on inclusion. Their most recent deployment – in Panama during the Covid-19 pandemic – allowed them to identify key trends of challenges that the population considered top priorities for the government to resolve, including but not limited to economic reactivation policies, support for more digital transformation in workplaces, and growth programs for SMEs.
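To illustrate the general idea of surfacing citizen concerns from large volumes of short texts – and only the general idea, since Citibeats' actual models are far more sophisticated and proprietary – a toy keyword tally might look like this. The topic names and keyword lists are invented:

```python
# Toy sketch: tally which topics appear across many short citizen posts.
# Real systems use multilingual NLP models, not keyword matching.
TOPICS = {
    "economic reactivation": {"jobs", "economy", "reopening"},
    "digital transformation": {"remote", "digital", "online"},
    "sme support": {"sme", "small", "business"},
}

def tally_topics(posts):
    counts = {topic: 0 for topic in TOPICS}
    for post in posts:
        words = set(post.lower().split())
        for topic, keywords in TOPICS.items():
            if words & keywords:  # any keyword present in the post
                counts[topic] += 1
    return counts

posts = [
    "We need jobs and reopening now",
    "Remote work should stay",
    "Help small business owners",
]
print(tally_topics(posts))
```

Even this naive version shows the shape of the pipeline: many unstructured inputs reduced to a ranked list of concerns that an authority can act on.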

Additionally, great advances are being made not only in the number of IoT-enabled devices that are available and deployed, but especially in the capabilities of each of these devices to deliver high-quality and, above all, useful data for data-driven governance. Here is where the concept of edge computing becomes a key factor. The concept refers to the processing that occurs on the device itself, versus what needs to be sent to centralized servers for analysis. Edge computing not only affects strictly technical capabilities like speed and reliability of processing, but also brings advantages for some of the cybersecurity and data privacy challenges these networks face. All this together means that government institutions will have an almost infinite amount of data to work with, at a speed that until only a few years ago was considered unreachable.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

Photo credit: Maarten Van Den Heuvel via Unsplash