Can we be public and private at the same time? The coded thrill of cyberconnectivity

Everything we do online is observed and recorded by people we can't see and mostly don't even know about. UCalgary researchers look at how to increase security and maintain privacy in a world where they're constantly under attack.

Think of the Internet as what a clothesline used to be when everyone used them. The clothesline was a set of coded messages sent into the public domain. You pegged up a piece of clothing and reeled it out for all to see. Here is my story, you said.

Here is my husband’s new shirt, size XL, stained with red wine. Here are our bedsheets, fraying at the edges. Here is my daughter’s bottomless underwear. On second thought, you hang up the underwear in your daughter’s room.

You trusted your neighbours to leave your clothing alone, but you knew they'd gossip about it. Your secrets were out there for all to interpret. For all to judge.

And so it is on the Internet.

We are constantly making decisions about trust. Who can be trusted with the personal information we reel out into the public domain? Which platforms, pages and publishers will take good care of our personal data, our credit card numbers and our reputations?

Sometimes the sheer convenience of the Internet trumps our privacy concerns. It’s so easy to shop online. It’s so fun to share photos on Facebook. It’s so handy to see customized ads.

But then we open our laptops and read about a data breach at the Bank of Montreal, or the Cambridge Analytica scandal at Facebook. We see an ad about migraine treatments and we wonder how the computer knew . . . We read about a teenaged girl who lost the war against cyberbullying. And there’s a survey about health care records – we decide it’s okay to release that information into the digital world.

But how far should we extend our trust? When does a leap of faith become a reckless plunge, in this world of ransomware and security breaches? We reel out our coded laundry list of messages, and we either close our eyes or we take a keen look around, knowing that our conception of privacy is shifting beneath our feet.

Privacy for the public self

“We are redefining privacy and what it means to us,” says Dr. Emily Laidlaw, PhD, an associate professor in the Faculty of Law at the University of Calgary. Laidlaw works in the area of civil law reform, and her research investigates how laws can best protect privacy and online reputations.

The legal definition of privacy is tough to pin down. Laidlaw believes privacy should be about the right to be left alone, to be anonymous, and to be free from targeting. But you shouldn’t have to give up your participation in online communities to achieve privacy, she says. Your dignity and autonomy should be protected even as you participate in online conversations, without being monitored or sanctioned.

Laidlaw cites the case of Gregory Alan Elliott, a Toronto man charged in 2012 with criminally harassing two women’s rights activists on Twitter. Although the tweets were vulgar and obscene, Elliott was acquitted of criminal harassment. The judge commented that Twitter users waive their rights to privacy in their tweets. On Twitter, anyone can see your tweets (in most cases). So the ruling implies a binary choice between privacy and online participation: you either accept the indignities of online conversation or you resign from it. Period.

“Historically it’s been an all-or-nothing proposition,” says Laidlaw. “Now we’re suffering through the weaknesses of that approach. Now law reform is about trying to find a balance between privacy as this idea of seclusion, a right to be left alone, and privacy in public, such as participating in online spaces.”

This balance is reflected in the current debate on the “right to be forgotten,” where people have asked Google to remove links to certain search results. People would like to think they can trust a search engine to act magnanimously in making decisions about online content. But, warns Laidlaw, “people don’t really own their public narrative. You want to think you have some control over the story that’s said about you but often it’s about peer review, or what other people say.”

Laidlaw says our attitudes toward privacy have shifted along with our attitudes toward the Internet, which burst into the 1990s as a wild spirit heralding freedom of expression. A tool for democracy and for challenging the status quo.

Says Laidlaw, “The narrative that’s existed for a long time is that intermediaries (service providers, social media platforms and search engines) should absolutely not be regulated; the only way for the Internet to be free and open is for us to impose no responsibilities that they don’t undertake themselves. But we’re realizing that this hasn’t been effective to address some persistent and harmful forms of online abuse.”

Examples of abuse such as cyberbullying and revenge pornography are all too prevalent. Witness the stories of Rehtaeh Parsons and Amanda Todd, two Canadian teenagers who died by suicide after being ferociously cyberbullied. As the world witnessed such abuse, alongside the “fake news” complications of the 2016 US election, the regulatory pendulum swung toward a more heavy-handed position. On the upper end of the regulatory spectrum is Germany, with its 2017 Netzwerkdurchsetzungsgesetz (NetzDG) law, which requires social media sites to speedily remove hate speech, fake news and illegal material. Here in Canada, regulations on mandatory privacy breach notifications will take effect in November 2018, under the Personal Information Protection and Electronic Documents Act (PIPEDA).

But Laidlaw is in favour of the subtler approach governing bodies are now beginning to take as they balance free speech and privacy. “In terms of regulation, it’s going to be very different from the top-down forms we’ve seen – it’s going to be a sort of regulatory nudging, like finding ways to incentivize companies to be responsible.”

In this gentler world of regulation, intermediaries are less liable and more responsible. They can make their own decisions about what to allow online. But it puts intermediaries in the awkward position of trying to figure out how to monitor their online spaces, making judgment calls about what’s acceptable online content and what is not.

In Canada, there are lots of decisions to be made around online privacy laws. “We are in a period of change,” says Laidlaw. “It’s a moment of opportunity, it’s a blank slate right now where we can develop our laws on privacy or intermediary liability. The mistake would be sitting around and waiting for a case to wind its way through the courts and for some judgment to be pronounced.”

Laidlaw sees privacy in public spaces as an upcoming legal hotspot. “It’s tricky because you rub up against the freedom of speech argument,” she says. “If you’re out in the public domain and someone takes a picture of you, have they invaded your right to privacy?”

Some privacy arguments are complicated by factors like sexualized photography, as seen in the case of the “CanadaCreep” Twitter account. The account was shut down in 2017 for posting photos that were taken without consent and were sexual in nature.


“We need more case law on this issue of privacy in public,” says Laidlaw. “In Canada we’re all over the map.” Internationally, attitudes differ toward the issue. In the United States, the law tends to suggest that citizens do not have the right to privacy while in public places. “But in Europe,” Laidlaw explains, “even when you’re in public there’s this zone of interaction between you and others that is seen as essentially private.”

The process of determining where to place one’s trust in the online world is like walking a tightrope, Laidlaw says. Americans tend to put their trust in the paramount need for freedom of expression, and they are content to come down on that side of the rope. “They take quite a blunt approach,” observes Laidlaw. “I’m investigating the more nuanced approaches.”

Cyberhistory repeats itself

Issues like privacy in public spaces seem to catch us by surprise. The technology sector advances so rapidly that states and regulating bodies often struggle to keep pace. But has this always been the case? As we approach new technological peaks, a look back at the history of surveillance and signal intelligence offers insight into the way history repeats itself, and ways to learn from the past.

Dr. John R. Ferris, PhD, a professor in the Department of History in UCalgary's Faculty of Arts, has a unique perspective on the issue of online trust. A specialist in international and strategic history, Ferris is the authorized historian for the British communications intelligence agency (Government Communications Headquarters). He says there are compelling links between today's concerns with communications privacy and events of the early 20th century.

Ferris points out that the desire to invade privacy goes way back, but the form of that invasion is constantly evolving. Take the practice of signals intelligence – the interception and interpretation of electronic communications – as an example. “Back in the early 20th century,” says Ferris, “it was only governments who were reading the mail of other governments. The more unpleasant governments would read the mail of their own citizens, too. But they couldn’t read the mail of Canadian citizens.” Technology exposes our communications much more broadly, and shifts the power dynamics of signals intelligence.

Ferris calls our current time the second age of signals intelligence. The first age started with World War I, with military strategy as the primary purpose. Today, the purposes – and methodologies – of signals intelligence vary widely. “The age we live in features a fundamental transition in terms of how our messages can be intercepted and listened to,” says Ferris. “Around the time of the First World War, very few people figured out that armies and navies would use radio in a war, and that this would expose a huge amount of encoded traffic to analysis by foreign code breakers.”

Just as today’s courts have been caught somewhat flat-footed by technology-driven privacy issues, so early 20th-century governments, too, were caught by surprise. “It was only when the shift to radio became so pervasive that governments realized, 'we absolutely need this technology.'”

Britain sprang into action, setting up a legal regime for censorship that gave authorities access to every radio message and every telegram sent across the Atlantic. Says Ferris, “By mid-war they were reading all the mail too – almost a billion letters, by my calculations.”

When code-breakers started to hit their stride (think Bletchley Park), they were successfully identifying weak links in the German system. Attacking weak links is a concept that withstands the test of time. “In computer security,” Ferris says, “all you really need to do is not be the easiest prey. As long as you’re faster than someone else the bear will eat the other person.” Whether you’re breaking German codes or hacking email messages, the same principle applies – especially in today’s communications-heavy culture. “We’re not playing a Bletchley Park game, but this is our normal life,” says Ferris.

Between the 1920s and 1980s, says Ferris, you can see the roots of today’s signals intelligence environment. In the 80s, relatively few everyday people would have messages intercepted by foreign states. And files were the stuff of squeaky drawers or cabinets, locked with tiny keys. To access files, you had to physically open an office door and a cabinet. “Now,” Ferris says, “anyone wanting to attack your files simply attacks your computer.”

Like the shift to radio in the early 20th century, the shift to online communications came as something of a shock to intelligence experts. In 2001, about a decade after the World Wide Web was born, major national signals intelligence agencies began to realize that this was the game to play.

Once the Internet began to take centre stage in the intelligence game, concerns about government surveillance of citizens began to surface. “When the Edward Snowden case became public,” says Ferris, “there was public paranoia around the governments of the U.S. and the United Kingdom viewing everyone’s communications.”

Ferris is sanguine about the government’s desire to surveil everyday citizens. “In the U.K. for example,” he says, “there are 80 billion texts and messages every day. For the U.K.’s communications headquarters to read all of those is impossible. They’re just not interested. They’re trying to weed out people like you and me so they can focus on what they really need to go after.”

Ferris is more trusting of governments and less optimistic about everyday crime. “Criminals are more likely to be your computer security problem,” he says.

So what can we learn from the connections between military history and Internet privacy? Ferris sees a social shift in communications trends. “In the age of social media,” Ferris says, “we have drifted into a kind of social signaling where we try to push information about ourselves out there, in hopes someone we find interesting will find us interesting.”

But Ferris also sees an ongoing trend in privacy breach prevention. “As a historian I deal with signal security for armies or foreign offices, and they have the same issues ordinary citizens do. You can have the very best regulations, but people don’t always do what they are supposed to do.”

In other words, history is telling you not to become the weakest link. Most of us know how to secure our information; we simply can’t be trusted to do what we’re supposed to do.

The creeping urgency of cyberawareness

The need to think carefully before we trust new technologies and intermediaries is underscored by Dr. Tom Keenan, Ed.D., a professor in the Faculty of Environmental Design and adjunct professor of computer science who specializes in intelligent communities and computer security. Keenan taught the first Canadian course in computer security in 1974 and recently published Technocreep: The Surrender of Privacy and the Capitalization of Intimacy.

The title of his book evokes not only the insidious progression of technology, but the unsettling feeling that overcomes us as we confront our deepest fears about it. Keenan recommends facing those fears head-on, so we can “make rational decisions as citizens, software designers, creators, parents and consumers.”

Keenan's analysis of security issues moves well beyond the usual Internet suspects, extending into consumer advocacy. “The point of Technocreep, and all of my work,” says Keenan, “is that consumers should be more aware of what is going on in the background, and have greater control over how their data is used.”

Keenan describes technological progress as a “relentless march,” as the industry creates new devices and finds new ways to apply them. One such tool is outlined in a study cited by Keenan, recently published by Stanford researchers Yilun Wang and Michal Kosinski. The results suggest it’s possible to determine a person’s sexual orientation from a single facial image, with a success rate of between 71 per cent and 81 per cent. The program isn’t perfect, but it’s more accurate than a human.

"For business purposes, prediction doesn’t have to be 100 per cent accurate," says Keenan. "Suppose an online travel agency has access to your face, which is probably all over the Internet, and uses a facial analysis to guess that you’re gay.” You’ll see a series of ads that make assumptions about your sexual orientation, say, for gay cruises. Even if they're wrong 19-29 per cent of the time, this recognition trick has just gained the travel agency a significant competitive advantage.

Keenan says part of the problem is the magnetic force of technological progress. “It has become so easy to let our technology do things for us,” he says. “Why go to the bank when you can take a picture of a cheque and it goes right into your account? Why fight the lines in a mall when Amazon will deliver your product the next day? Why struggle with a paper map when online navigation services will guide you turn by turn and even alert you to traffic jams and speed traps?”

These services may appear to be free, but they do come at a cost. “The information you give up by using them is worth a fortune to companies that want to know more about you,” Keenan points out, citing new features coming to Google Maps in the summer of 2018.

Tech site iMore.com describes “For You” as a set of features that can “show you new places in your area that you might like, and will let you know about restaurants opening up near you, the hottest spots to check out, and any other cool haunts that may be of interest to you based on places you’ve already been and places you’ve rated positively. It’ll even show you how well your personal tastes will match up with restaurants you’ve never been to.”

If you’re a tech enthusiast who trusts Google to curate all your various tastes, interests and “cool haunts,” you’ll read this description with gusto, adding your own exclamation marks as you read. But for those who share Keenan’s concerns about the creepiness of technology, this blurb reads as an ironic commentary on the naiveté of today’s tech consumer.

Between the lines are dark subtexts, reminders that Google knows where you are, where you’ve been, and, possibly, where you’re going. In fact, as Keenan might say, it’s telling you where to go. Thanks, Google!

Of course, the risks associated with the increasing intelligence and reach of technology don’t stop at Google. Keenan’s advice is to think hard about each new piece of technology that enters your life and weigh the risks and benefits. Here are just a few of Keenan’s tips:

  • Want a fitness monitor? Sure, but know that it can tell when you're having sex (2:30 a.m., burned 150 calories, took zero steps). Who will it tell?
  • Smart home gadgets? OK, but they are notoriously hackable. A researcher got into a person’s entire home network via a smart tea kettle.
  • Online surveys? You may learn your personality type or which celebrity you are most like, but, as we learned in the Facebook scandal, you may also be opening the door on your private data to unscrupulous operators.

The concerns around placing too much trust in technology and its guardians can also have more profound implications, for example with ransomware and the Internet of Things. In a paper delivered at the RSA Asia Pacific Japan Conference in Singapore in July 2018, Keenan predicted severe security problems associated with “smart” health care devices that are on computer networks. Imagine this scenario:

A hospital IT manager gets a dreaded email. "We own you . . . but we're not going to encrypt your patient files. That's so 2016. Instead, we know you have a Siemens brand MRI machine, a Picker brand X-Ray, and hundreds of Baxter IV infusion pumps. They have vulnerabilities and we know them. You don't. So, until you send us 50 Bitcoins, we will randomly kill a patient every other day."

The scenario looks bleak, but Keenan does have solutions up his sleeve. He recommends increased sharing of vulnerability information through both formal and informal channels, and putting pressure on manufacturers to act promptly to resolve security issues. He also suggests a “good Samaritan” law that would not hold health institutions responsible for the fall-out of ransomware negotiations, and increased awareness of ransomware offenders, who may leave a trail on Internet spaces that are unsearchable by most engines.

It’s easy to see the ransoming of hospitals or the creepiness of Google Maps as reasons to be judicious in trusting technology. Keenan says consumers should be diligent in understanding the privacy and security implications of the technologies they use. Today’s digital world is, after all, full of challenges unknown in the world where secrets were embedded in laundry and clotheslines. “Unlike the gossipy neighbours who will eventually move or pass away,” says Keenan, “the online world never forgets.”

Digitizing health care records: access vs. privacy

In Alberta, health care professionals are about to implement a new service that puts the pros and cons of online information into relief. In 2017, a team of researchers including Dr. Doreen Rabi, MD, associate professor and clinical endocrinologist in UCalgary's Cumming School of Medicine, surveyed Albertans on the idea of accessing health records online. The Online Patient Health Portal Survey, which elicited over 1,500 responses, explored Albertans’ perceptions of a site they could use to access, manage, and share their own health information. The results of the survey show the extent to which Albertans trust their provincial government and its health administration arm, Alberta Health Services, to keep their health narratives safe.

Once implemented, the portal will serve two key functions:

  • Digitize and amalgamate Albertans’ health care records in a central location where practitioners can share them as necessary in order to improve care
  • Give patients online access to their health care records

Rabi recognizes the stakes are high for the health portal’s success. She says, “The work I’m doing with the province and other colleagues is trying to figure out how to create policy and use technology to integrate those records while honouring the standards we have for privacy.”

The portal will be vital to the streamlining of accessible information. “There is an assumption that information is already being shared seamlessly,” Rabi says. “But in reality, because of regulations and legislation on confidentiality, health information is often siloed and medical records end up being fragmented.”

A seamless process for accessing health information will be particularly valuable to patients with complex health conditions. When patients have multiple physicians, prescriptions and treatments, they can get stuck trying to sort out contradictory plans. “It runs like this,” says Rabi. “Doctor A wants me to do this but Doctor B wants me to do something that conflicts, and I’m caught in the middle trying to negotiate what I’m supposed to do.” The portal makes dialogue between practitioners and patients easier, and will turn health records into living documents so patients can play a more active role in their own care.

Albertans seem to recognize the project’s enormous potential: survey results show overwhelming enthusiasm. When asked if people should have access to their own health information online, 76 per cent strongly agreed and 16 per cent agreed.

Indeed, the health portal is about more than just convenience. It’s part of an Alberta Health Services (AHS) push for quality improvement, and there is hope that the portal will help AHS improve things like consistency, identifying best practices, and analyzing success or complication rates across procedures.

Government officials want Albertans to see the portal as a trustworthy source of information which provides not only convenient access but security and privacy. The survey shows that access to vital health information is more important to Albertans than their anxieties over potential security breaches. But there are anxieties. The survey shows that 32 per cent of respondents were extremely concerned about security and 20 per cent were concerned.

“Some of the respondents were acutely aware of the security implications,” says Rabi. “There are valid concerns about the information being housed in one spot where it could be compromised. Some respondents suggested caching some information in certain areas so that if there were a breach it wouldn’t all be compromised.”

Respondents also noted that they did not want third parties to have access to their health care data. “In the age of Cambridge Analytica,” says Rabi, “third party data collection is very topical. We would never anonymize data and send it off for entrepreneurial purposes. Patients will not be seeing ads for drugs popping up on their screens.” Rabi notes that there have been breaches in the U.K. and the U.S., so these anxieties are informed by real-world situations.

The sensitive nature of health records means the portal project will be doubly open to privacy concerns. “People feel vulnerable when they share medical information,” says Rabi. “They may agree in principle that people need to see their information, but no one wants to share their Pap smear results with random care providers.”

Ultimately, the conversation comes back to trust. Making information visible to patients online means more transparency, which means patients are more likely to trust their physicians. As long as they can trust the security of the portal, Albertans are on board with an online health information system designed to improve overall care.

Sharing or selling? What happens to our personal data

If Dr. Hooman Hidaji, PhD, were reading the health care portal survey results, he would likely applaud respondents for their security concerns – especially those regarding third-party data collection. Hidaji is an assistant professor of Business Technology Management (BTM) at UCalgary’s Haskayne School of Business. Together with colleague Raymond Patterson, Hidaji has been researching funding models used by publisher websites such as newspapers. The results show that these websites typically draw on two revenue streams, subscriptions and the sale of information about their users, often in combination. The third parties active on a website may provide a service such as targeted advertising, site customization or site administration, or additional functionality, like a link to a video or a map.

When a website shows you an ad for your favourite band or a well-loved restaurant chain, or when it highlights your own province’s news headlines, there is a third-party company operating behind the scenes. The owner of the website is earning money by allowing third parties to access your personal data.

Hidaji cites the examples of the Financial Times, which requires readers to subscribe to its site, and The Washington Times, which visitors can read online for free. But recall Tom Keenan’s warning about the illusion of online services that seem to be free of charge. In the case of The Washington Times, a total of 36 third parties share user information – compared to 22 at the Financial Times. So reading a “free” newspaper comes at the cost of personal information, mined by third parties for whom this data is valuable.
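Tracker counts like these can be approximated at home. The sketch below is illustrative only – the request URLs and domain names are invented, and a rigorous measurement would use a proper public-suffix list rather than the crude two-label rule here – but it shows the basic idea: record every network request a page makes (for example, from a browser’s network tab) and count the distinct domains that don’t belong to the site you actually visited.

```python
from urllib.parse import urlparse

def third_party_domains(first_party, requested_urls):
    """Return the set of domains contacted that differ from the site's own."""
    def base_domain(host):
        # Crude heuristic: keep the last two labels (ignores multi-part TLDs).
        return ".".join(host.split(".")[-2:])
    first = base_domain(first_party)
    return {base_domain(urlparse(u).netloc) for u in requested_urls} - {first}

# Hypothetical request log for a visit to a news site.
urls = [
    "https://www.example-news.com/story.html",
    "https://cdn.example-news.com/style.css",          # same owner, not counted
    "https://ads.adnetwork-a.com/pixel.gif",
    "https://tracker.metrics-b.io/collect?uid=123",
    "https://maps.widget-c.net/embed.js",
]
print(sorted(third_party_domains("www.example-news.com", urls)))
# ['adnetwork-a.com', 'metrics-b.io', 'widget-c.net']
```

Each domain in that output is a company that learned something about the visit, which is exactly the kind of background activity Hidaji argues users should be able to see.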

According to Hidaji, most users do not want to view targeted ads, but the practice of web tracking is growing by leaps and bounds. And website publishers have no real obligation to be transparent about what happens to user data.

The practice merits further examination – and increased regulation. Emily Laidlaw’s recommendation for a thoughtful, nuanced approach could be applied here. Rather than simply imposing all-encompassing rules on website publishers, it could be productive to encourage a sense of responsibility in the managers of such sites, in order to balance users’ desire for privacy with the profit-driven goals of third-party organizations.  

Privacy statements are how publisher sites tell visitors what happens to their personal information. Read the statements on most major websites and you’ll find they describe how user information is collected and shared, make vague references to using the data to improve user experiences, and assure readers that their data will not be shared without their consent – except as outlined in the privacy statement or as permitted by law.

But as Hidaji points out, most of us do not read the fine print. Privacy messages are often seen as nuisances that pop up on our screens and do little to promote transparency. However, with privacy issues in the news, such as the Facebook and Cambridge Analytica scandal, the conversation is changing. In addition, the European Union recently launched its General Data Protection Regulation (GDPR), a strict set of rules designed to protect users’ personally identifiable information. As Emily Laidlaw suggests, intermediaries like Facebook could in future be bound by a sense of responsibility for their users’ privacy concerns, rather than defaulting to the maximum sharing of information.

In Hidaji’s opinion, the biggest privacy concerns centre on a lack of transparency. “Users don’t know what’s going on,” he says. “It’s really hard to track where this data goes and where it comes from. A targeted ad for tennis rackets pops up on your screen, but you don’t remember where you last read about Wimbledon.”

Personal information can also be used to determine a user’s socioeconomic status, according to Hidaji. That information can be used for purposes such as price discrimination. You might be offered a mortgage rate based on data mining that suggests a certain salary range, for example.

According to Forbes.com, a 2012 study of 200 online stores found that if Internet visitors came to a site from a discount site such as Nextag.com, they would be charged as much as 23 per cent less than other visitors for the same merchandise. Amazon and Staples were among the companies that varied their prices by geographic location – by as much as 166 per cent. These practices are legal, but many consumers would consider them unfair.

Decisions about trusting web publishers with your personal information are difficult to make. These decisions are more about a willingness to treat your data as a commodity, and trusting third parties to behave responsibly once they have your information – at least until the future brings new regulations into play.

Taking privacy into the future: quantum computing

The future of online security is as hard to pin down as a dancing photon. Perhaps that’s what makes it so appealing – and so frustrating. Just when you think you’re beginning to understand the binary relationship between zeroes and ones, physicists start talking about quantum computing. It’s a new wave of computing that asks us to rethink the trust we’re placing in various aspects of information processing and online security.

Dr. Barry Sanders, PhD, a professor in UCalgary’s Department of Physics and Astronomy, and director of the Institute for Quantum Science and Technology, specializes in quantum information. His work in quantum computing focuses on two areas: how to build a quantum computer and how best to use it. Both areas have the potential to revolutionize the future of cybersecurity.

“Right now it’s a very exciting time,” says Sanders. “Billions of dollars have gone into quantum computing start-ups and major computing companies, just in the last 12 months. It’s a do-or-die moment.” Quantum mechanics has massive potential for the computing industry, and Sanders feels it could be a game changer. “But we’re in early days,” he says.

Quantum computing is so much more powerful than traditional computing because it uses quantum bits (qubits) instead of regular bits. A quantum, remember, is the smallest possible unit of a physical quantity. The quantum computer exploits the unique ways nature behaves at its smallest scales.

To explain the awesome power of quantum computing, Sanders points out that it operates under a different set of rules from traditional computing. “It’s not about making smaller, faster chips,” Sanders explains. “In computing we typically represent all information as zeros and ones and then process it following certain rules of logic. Quantum computing is a completely different concept. Instead of thinking of strings of zeros and ones, we think of them in super-positions like waves that interfere – in some sense all possible strings co-exist.”

These are not easy concepts to visualize. It takes years to understand the basics of quantum mechanics. But try picturing qubits as waves that flow into one another, interfering with each other as they move. When those interference patterns are harnessed for computation, the correct answer to a problem can surface while all the other answers are eliminated by destructive interference.
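Those colliding waves can be sketched in miniature. The toy Python below is an illustration, not a real quantum simulator: it represents one qubit as a pair of amplitudes and applies the Hadamard gate, a standard quantum operation, twice. The first pass spreads the qubit into an equal superposition; the second makes the paths recombine so the |1⟩ outcome cancels by destructive interference, leaving only |0⟩.

```python
# A toy illustration (not a real quantum simulator): one qubit as a pair of
# amplitudes, and the Hadamard gate that creates a superposition.
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero = [1.0, 0.0]            # the qubit starts as |0>
superposed = apply(H, zero)  # equal amplitudes: both answers "co-exist"
back = apply(H, superposed)  # the two paths to |1> cancel each other out

print(superposed)  # ~[0.707, 0.707]
print(back)        # ~[1.0, 0.0] -- destructive interference removed |1>
```

Real quantum algorithms choreograph the same effect across many qubits at once, so that wrong answers to a problem cancel while the right one survives.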

The unique characteristics of quantum computing add up to efficiencies – efficiencies that could have incredibly positive impacts on a variety of industries, but that could also have dire implications for cybersecurity. Take credit card encryption, for example. Quantum computing is like a nuclear bomb that could explode traditional security methods. Keep in mind that when you shop online, credit card encryption is built around factoring problems (think grade-six math: 5 and 3 are the prime factors of 15; but with huge numbers, factoring becomes a very difficult problem to solve). “Quantum computers are extremely efficient at factorization,” says Sanders. “You think blockchain is safe or your credit card number is safe? Well, none of that is safe if quantum computing gets up and running. We will need to find a different way. And that’s where quantum cryptography comes in.”
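The asymmetry Sanders describes is easy to demonstrate on any computer. In the sketch below, multiplying two primes takes a single step, while recovering them takes brute-force trial division – instant for 15, but hopeless for the hundreds-of-digits numbers real encryption uses. That gap is precisely what a quantum factoring algorithm would close.

```python
# Multiplying two primes is instant; recovering them by trial division is the
# slow direction. RSA-style encryption leans on exactly this asymmetry
# (real keys use primes hundreds of digits long, far beyond trial division).
def smallest_prime_factor(n):
    """Find the smallest prime factor of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no smaller factor, so n itself is prime

p, q = 3, 5
n = p * q                      # easy direction: 3 * 5 = 15
f = smallest_prime_factor(n)   # hard direction (for huge n): 15 -> 3
print(f, n // f)               # recovers the primes 3 and 5
```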

Quantum principles are a sort of double-edged sword. “It’s like the inverse of the old saying, ‘What the Lord giveth, the Lord taketh away,’” says Sanders. “Quantum computing takes away our security blanket, but quantum cryptography gives us a potential match for the immense code-breaking power of quantum computing.”

Quantum encryption, like quantum computing, could revolutionize the way we think of encryption. And, according to Sanders, it has a natural resistance to hacking: “If anyone tries to eavesdrop, you can detect it.”

The encryption is based on something called quantum key distribution, which offers a way to share secret keys such that any interception by a third party can be detected. “If I send you a message, I lock it with a key through a mathematical operation, and you receive an encrypted message, then unlock it with your key. It’s about ensuring you have compatible keys,” Sanders explains.
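The eavesdropper-detection idea can be sketched without any quantum hardware. The Python below is a deliberately simplified caricature of BB84-style key distribution (qubits are faked as bit-and-basis pairs; none of the real physics is simulated): measuring in the wrong basis scrambles the bit, so an eavesdropper who must measure to listen in leaves a telltale error rate – roughly 25 percent – in the sampled key.

```python
# A stripped-down caricature of BB84-style quantum key distribution.
# Qubits are faked as (bit, basis) pairs; the point is the logic, not the
# physics: measuring in the wrong basis randomizes the bit, so an
# eavesdropper who must measure to listen leaves detectable errors behind.
import random

def measure(bit, prepared_basis, measure_basis):
    """Same basis: read the bit faithfully. Wrong basis: get a coin flip."""
    return bit if prepared_basis == measure_basis else random.randint(0, 1)

def error_rate(n=2000, eavesdrop=False):
    random.seed(42)  # fixed seed so the sketch is repeatable
    mismatches = compared = 0
    for _ in range(n):
        alice_bit = random.randint(0, 1)
        alice_basis = random.choice("+x")
        bit, basis = alice_bit, alice_basis
        if eavesdrop:  # Eve measures in a random basis, then resends
            eve_basis = random.choice("+x")
            bit = measure(bit, basis, eve_basis)
            basis = eve_basis
        bob_basis = random.choice("+x")
        bob_bit = measure(bit, basis, bob_basis)
        if bob_basis == alice_basis:  # they later compare a public sample
            compared += 1
            mismatches += (bob_bit != alice_bit)
    return mismatches / compared

print("no eavesdropper:", error_rate())              # 0.0 -- key is clean
print("with eavesdropper:", error_rate(eavesdrop=True))  # close to 0.25
```

If the sampled error rate is near zero, the rest of the key can be trusted; if it spikes, the two parties throw the key away and start over.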

Another area of potential for quantum computing is machine learning. Sanders isn’t sure yet whether machine learning and quantum computing will lead to any solid findings, but he says there is some hope that quantum computing can tackle one of the building blocks of machine learning – optimization problems. In simplest terms, optimization is about using an algorithm to make something the best it can be. Like the process of identifying faces in photos. “You train a computer and it optimizes things,” says Sanders. “The computer trains, gets experience, gets rewarded or punished, then gets better at its task. And the more quickly it learns, the better.”
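The train-and-improve loop Sanders describes rests on optimization. Here is a minimal sketch, with a made-up one-parameter cost function standing in for a real learning problem: each pass of gradient descent nudges the guess downhill until it settles at the best value.

```python
# The "optimize by iterating" idea in miniature: gradient descent nudging a
# single parameter toward the minimum of a toy cost function. Real machine
# learning does this over millions of parameters, but the loop is the same.
def descend(start=0.0, rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)     # derivative of cost(x) = (x - 3)^2
        x -= rate * grad       # step downhill; each pass improves the guess
    return x

best = descend()
print(round(best, 4))  # converges on 3, the minimum of the cost function
```

The hope Sanders voices is that quantum computers could run searches like this one over vastly larger spaces than classical machines can handle.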

Sanders, who is also interested in meta-level discussions on computing, says that quantum computing is only one of many promising directions in the field. “It’s not just a binary choice of quantum computing or not,” he says. “We should be open to other approaches, like using DNA to augment computing or to tap into the brain. Then there’s the whole idea of approximate computing, which suggests that we don’t need the computer to be deterministic; things can happen in more than one way. So we can exploit errors of the machine. There are so many alternatives that could all revolutionize society.”

What does this vast realm of computing futures mean for the question of trust? If computers, and therefore cybersecurity, are moving targets, can our information and communications ever be totally secure?

“This has been the war since the dawn of time,” says Sanders. “You will always have to find ways to encrypt better and find ways to break codes.” In the manner of British signals intelligence during WWI, each side will always learn from the other. “As long as humans are on the earth,” Sanders says, “we will be fighting this battle. There will never be a resolution.”

If quantum computing is any indication, it’s possible that the cyberworld will become more and more difficult for the lay person to comprehend. Perhaps Tom Keenan’s advice to be aware of technology and all its implications will become increasingly important.

Or, perhaps we will find new ways to offload that responsibility. “Humans love to put social frameworks around things like security issues,” says Sanders. “We love to formulate international policies. So for us mere mortals who don’t understand computing in all its complexities, we can try trusting those governing bodies to make responsible decisions.”

But trust doesn’t translate into knowledge where computing is concerned. “There will always be unknowns,” Sanders says. “That’s the case with quantum computing and machine learning. We’re still waiting to see if they will really work.”

In the meantime, we can continue to learn from the past. We can continue to redefine privacy, and nudge online powerhouses toward respecting our privacy. We can stare down technological creepiness even as our screens memorize our faces and commodify our data. And we can reel out our coded messages, pegging them in careful lines toward the open air, knowing full well the secrets they reveal.



ABOUT OUR EXPERTS

Dr. Emily Laidlaw, PhD, is an associate professor in the University of Calgary’s Faculty of Law. She researches in the areas of internet and technology law, copyright law, media law, human rights and corporate social responsibility. She has a particular interest in online abuse, intermediary liability, privacy and free speech.

Dr. Thomas P. Keenan, EdD, is a professor in UCalgary's Faculty of Environmental Design and an adjunct professor in the Department of Computer Science. Tom's research interests lie in information security, cyberwarfare, privacy and security, and the social implications of technology. His current research focuses on the positive and negative effects of technology adoption in both the developed and developing worlds, and how technology can be a driver for economic and social development.

Dr. John R. Ferris, PhD, is a professor in UCalgary's Department of History, a Fellow at the Centre for Military and Strategic Studies, a Fellow of The Royal Society of Canada, and the authorized historian for Britain's Government Communications Headquarters. His research interests lie in 19th- and 20th-century Imperial history, 20th-century British and European diplomatic history, inter-war Europe and modern warfare.

Dr. Doreen Rabi, MD, is an associate professor and clinical endocrinologist in UCalgary's Cumming School of Medicine. Her primary research interests include diabetes and cardiovascular disease health outcomes, hypertension, sex and gender in health, patient engagement, systematic reviews, and eHealth and mHealth.

Dr. Hooman Hidaji, PhD, is an assistant professor in UCalgary's Haskayne School of Business. His current research interests include economics of information systems, information systems security, online privacy, ICT supply chains, and social networks.

Dr. Barry Sanders, PhD, is a professor in the Department of Physics and Astronomy at UCalgary’s Faculty of Science, and director of the university’s Institute for Quantum Science and Technology. He is Editor-in-Chief of the New Journal of Physics, a Fellow of the Royal Society of Canada and a Canadian Institute for Advanced Research Senior Fellow in quantum information science. His current research interests include nonlinear quantum optics, quantum computing, quantum communication and quantum control.

