Forget TikTok bans. Think about connected Chinese cars.

This week our Congress is crafting legislation to remove TikTok from our lives. It is as misplaced as Nancy Reagan’s “Just Say No to Drugs” campaign — and perhaps as empty a gesture. Yes, there are real issues with all that social media metadata ending up on some Chinese hard drive, but the notion that ByteDance could ever cleanly separate its US operations and clouds from its Chinese ones shows how little our lawmakers understand the technology.

Instead, I would like you to think about the following companies: Nio, Inceptio, XPeng and Zeekr. Ever heard of any of them? They are all major Chinese EV makers, and all of them pose a much bigger threat to our data privacy and national security than TikTok. By way of reference, China has hundreds of car makers, and all of them are obligated to transmit real-time data to their government. Now these companies want to sell their cars here and are already running road tests.

Last fall, another bipartisan group of lawmakers sent letters to these and other Chinese EV makers, seeking more transparency about the data their cars collect. I haven’t seen the responses, but I’d guess the truthful answer is “we collect a lot of stuff that we aren’t going to tell you about, and we have to share it with the CCP.”

Last week, the Commerce Department issued its own request for public comments as it considers a series of regulations. The department is investigating the national security risks of EVs and other connected vehicles, along with the potential supply chain impacts of these technologies. Interestingly, it is finally acting on a Trump-era Executive Order. Another bipartisan effort. The document linked above asks for a lot of details about obvious data collection methods. If I were running a Chinese car company, I would be designing systems that are less obvious. One of the things these Chinese car makers are quickly learning, thanks to the Tesla business model, is how to become better software companies. (Tesla also makes and sells its cars in China, BTW.)

While TikTok claims some 170 million US users, only some of whom are adults, the threat from car metadata is much more pernicious, especially when it can be paired with phone location data from passengers sitting in the same vehicle. What the two have in common is that all this data is being collected without the user’s knowledge or consent, or any understanding of who is actually collecting it.

Those phones have been recording our movements for quite some time, without any help from China. There are so many stories about tracking the jogging routes of US service members at foreign military bases, or tracking a spouse’s movements, or figuring out where CIA employees stop for lunchtime assignations near Langley, etc. But that pales in comparison to what a bunch of CPUs and scanners sitting under the hood can accomplish on their own.

Remember war driving? That term referred to someone cruising around with a Wi-Fi scanner, hunting for open networks to map or break into. It seems quaint now that a car could do all that work without needing a human occupant. I guess I will go back to watching a few Taylor vids on TikTok, at least until Congress removes the app. In the meantime, you might want to review the location services settings on your phone.

Dark Reading: Typosquatting Wave Shows No Signs of Abating

A spate of recent typosquatting attacks shows that this scourge is still very much with us, even after decades of defender experience with it.

Ever since the Internet became a commercial entity, hackers have been using it to impersonate businesses through a variety of clever means. And one of the most enduring of these exploits is the practice of typosquatting — i.e., using look-alike websites and domain names to lend legitimacy to social engineering efforts. In my latest post for Dark Reading, I talk about the recent series of attacks, why they continue to persist, and ways that enterprise security managers can try to prevent them from happening, although the fight isn’t an easy one.
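
To make the detection side a bit more concrete, here is a minimal sketch in Python of the kind of edit-distance check defenders use to flag look-alike domains. The brand name and candidate domains are hypothetical examples of mine, and real monitoring tools layer homoglyph and TLD-swap generators on top of this basic idea.

```python
# Minimal typosquat check: flag candidate domains within a small edit
# distance of a brand's second-level domain. Pure stdlib; the brand and
# candidate names below are hypothetical.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

BRAND = "example"  # your second-level domain
for candidate in ("examp1e", "exampel", "exarnple", "totally-different"):
    d = edit_distance(BRAND, candidate)
    if 0 < d <= 2:
        print(f"{candidate}.com is suspiciously close to {BRAND}.com (distance {d})")
```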


Dark Reading: NSA’s Zero-Trust Guidelines Focus on Segmentation

Zero trust architectures are essential protective measures for the modern enterprise. The latest NSA guidance provides detailed recommendations on how to implement the networking angle of these measures.

As businesses shift more workloads to the cloud, the need for zero trust computing strategies grows. But the notion of “untrusted until verified” is still slow to catch on, although in some parts of the world, such as the United Arab Emirates, zero trust adoption is accelerating.

To bridge the gap between desire and implementation, and to provide a more concrete roadmap toward zero trust adoption, the US National Security Agency has been publishing a series of guidelines over the past few years covering device protection and user access. The latest, released this week, concerns network security.

My story on what this means for zero trust is in Dark Reading today, and it can be found here.


Dark Reading: How CISA Fights Cyber Threats During Election Primary Season

With US election integrity and security having taken center stage as a political football since the 2020 Presidential race, the Cybersecurity and Infrastructure Security Agency (CISA) is doing what it can to dispel security concerns around this year’s trip to the polls.

CISA, along with several other organizations, has beefed up various cybersecurity support resources for elections in general, including more programs for state and local elections officials and for volunteer poll workers. In my post for Dark Reading today, I describe some of these efforts and explain the unique combination of cyber and physical security needed to ensure our democracy continues to hold free and fair elections.

When It Comes to Cybersecurity Practice, Don’t Be Okta.

I have written about Okta for many years, going back to when they were an upstart single-sign-on vendor coming of age in the era of cloud access and identity. By way of perspective, back in 2012 (when I wrote my first Network World review and gave them high marks for their product), most of Okta’s competitors offered on-premises servers, and the cloud was more of a curiosity than a sure bet. That seems very quaint by today’s standards, when the cloud is a foregone conclusion.

Now, however, you can count me as one of their detractors, which is why my hed says: when it comes to cybersecurity practice, don’t be Okta.

Let’s look at the timeline over the past couple of years. During 2022 alone, they experienced a phishing attack, another major breach, and had their GitHub source code stolen. Then last year they saw two separate supply chain attacks that affected most of their customers, along with leaked healthcare data on almost five thousand of their employees. And last fall yet another attack on MGM and Caesars resorts was blamed on a flaw in their software. It is almost too hard to keep track, and I can’t guarantee that I got all of them.

Some of these attacks were due to clever social engineering, which is embarrassing for a cybersec company to fall for. Now, all of us can have some sympathy over being compromised this way, and I know I have almost fallen for the trick myself, particularly when it comes in the form of a rando text message asking how I am doing or something equally innocent-looking. But still: Don’t Be Okta. Spend less time multitasking, particularly when you are on your phone, and focus on every message, email, and communication you receive to ensure you aren’t about to play into some hacker’s hands. Pay attention!

Some of these attacks were due to bugs in how Okta set up their software supply chain, or poor identity provisioning, or a combination of things. Okta’s CSO David Bradbury was interviewed over the weekend and promised to do better, rolling out various security controls in an announcement last week. That’s great, but why has it taken so long?

One weakness that attackers repeatedly exploited was Okta’s lack of attention to provisioning admin-level users. The company is now making MFA mandatory for all customer admin consoles and requiring passwordless access for all internal employees. It has taken them, what, 15 years and multiple hacks to figure this out? Neither of these is a heavy lift, yet I still talk to many folks who should know better and who have resisted implementing these tools to protect their personal account logins. Don’t Be Okta!
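
Neither control is hard to verify on your own tenant, either. Below is a rough sketch, assuming Okta’s documented Users, Roles, and Factors REST endpoints, of how a tenant admin could flag admin-level accounts with no MFA factor enrolled. Treat the endpoint paths, the role-type matching, and the pagination shortcut as assumptions to check against Okta’s current API docs before relying on the output.

```python
# Sketch: list Okta admin accounts that have no MFA factor enrolled.
# Endpoint paths follow Okta's documented Users/Roles/Factors APIs, but
# verify them (and add pagination) before trusting the results.
import os
import requests

ORG = os.environ["OKTA_ORG"]          # e.g. "yourorg.okta.com" (hypothetical)
TOKEN = os.environ["OKTA_API_TOKEN"]  # a read-only API token
HEADERS = {"Authorization": f"SSWS {TOKEN}", "Accept": "application/json"}

def get(path: str):
    resp = requests.get(f"https://{ORG}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

for user in get("/api/v1/users?limit=200"):   # first page only, for brevity
    roles = get(f"/api/v1/users/{user['id']}/roles")
    if not any("ADMIN" in role.get("type", "") for role in roles):
        continue  # skip non-admin accounts
    factors = get(f"/api/v1/users/{user['id']}/factors")
    if not factors:
        print(f"{user['profile']['login']}: admin with NO MFA enrolled")
```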

How about better and more transparent breach reporting? With some of those supply chain attacks, it took months to figure out their depth, nature, and cause — and then more time for Okta to properly notify its customers. As an example, the September attack was initially estimated to affect one percent of its customers before being revised to 100%. Oopsie. That doesn’t bode well for a trusted relationship with your customers. The EU’s GDPR requires breach notification within 72 hours. Was someone asleep, or was management at fault for taking its sweet time getting the word out?

Buried in all the good cheery messaging from last week was this little tidbit: “As more features are rolled out in early access mode, the company intends to turn the controls deemed most beneficial on by default.” Ruh-oh. Turn them all on by default, right now! You want security by design?

Bradbury ironically admitted that security has historically never been a core value for the company, and claims that almost half of its engineers are now working on security, on top of an actual security team. Just adding bodies isn’t necessarily the right move. Everyone needs to be focused on security, so I have to ask: what are the other half of the devs doing that gives them a pass?

This isn’t the way forward. Don’t Be Okta! Take a closer look at your own security practices, and ensure that you have learned from their mistakes.

Dark Reading: Biometrics Regulation Heats Up, Portending Compliance Headaches

This could be a banner year for biometric privacy legislation. The topic is heating up and lies at the intersection of four trends: increasing artificial intelligence (AI)-based threats, growing biometric usage by businesses, anticipated new state-level privacy legislation, and a new executive order issued by President Biden this week that includes biometric privacy protections.

But things could backfire: A growing thicket of privacy laws regulating biometrics is aimed at protecting consumers amid increasing cloud breaches and AI-created deepfakes. But for businesses that handle biometric data, staying compliant is easier said than done. I explore the issues surrounding implementing and regulating biometrics in a post for Dark Reading today.

The coming dark times for tech won’t be anything like the 2000s

My former colleague Dave Vellante has written a nice comparison of the current tech contraction with the dot-com bust of 2000. He makes interesting points about several factors, such as the roles played by Netscape and OpenAI as innovators and Nvidia and Cisco as major players, the stock market bubbles, and the risks and rewards along the way. However, he is missing one critical element: the population of tech workers has been shrinking and the pace of layoffs is increasing. And the way people are being laid off now differs in some big ways from back then.
Granted, back in 1999-2000 there were far fewer tech workers overall (as an example, Microsoft went from around 40,000 employees in 2000 to about 200,000 today, and Amazon grew from a few thousand to more than a million), and many of the tech companies were small, in some cases very small. The big difference between then and now is the pace of the layoffs. Back then, they happened quickly. Now, tech companies have been shedding workers continuously since the pandemic, and in much bigger numbers.
In the past few years there have been several rounds of layoffs at Spotify, ByteDance, Amazon, Twilio, LinkedIn, SecureWorks, Microsoft, Meta, and Twitter, which together added tens of thousands to the unemployment lines. And sure, plenty of startups that landed their Series A rounds went under in the past couple of years — that is to be expected. But today’s cuts are coming from established companies going through their first serious contractions.
Will some of these folks start their own companies? Sure. But tens of thousands? Not so sure.
But part of the problem — perhaps most of it, apart from slackening business demand in the tech sector — is the way we are all returning to work in the spaces previously known as our offices. In the midst of the pandemic, remote work took on new relevance and meaning, and it caught on quickly around the world in many different ways, some good and some bad. Take Slack, for example: they went 100% remote back in 2020. Other tech companies, such as Google, were less enthusiastic. And what I have seen is that the less enthusiastic companies were among the first to revoke home-working policies and mandate that people return to one of their offices.
Early on in the pandemic, I put together this pod with my partner Paul Gillin about some things to consider for the newly minted home worker. Those were more practical suggestions on what equipment to purchase and how to best secure your home. For a somewhat different treatment, I wrote this blog for Avast on how to craft equitable policies to encourage and evaluate home workers. Those pieces seem rather quaint now, and they assumed that once all this remote stuff was unleashed, we would stay that way.
That is not the case anymore. Four years later, many tech workers are being told to return to their offices. And the changes are confusing as companies try to adjust and populate their expensive downtown real estate. This makes no sense to me, and the latest dictums from Dell (for example) are guaranteed to cost them more people, which could be the hidden reason behind them. It is almost as if we forgot the productivity gains during Covid when people worked from home. Or maybe companies were just eager to see their workforces sitting in those awful bullpens where everyone is on headsets.
The return to the office says one thing about tech: the industry has done a lousy job of developing middle managers, who are insecure about handling underlings they can’t see or stand next to. It really is a shame: all this remote-access tooling has been developed over the decades, and the one group of companies you would think would figure this out is the first in line to recall its staff.
Also gone from today’s tech offices are some of the lavish benefits that were put in place to attract talent. Anyone getting free massages, catered meals, or yoga classes these days? It would be an interesting cohort for some research project.
Finally, there is my own cohort — tech journalists, who are being laid off once again in this latest cycle. The difference between now and 20-some years ago is that back then we had printed magazines supported by millions in ad revenue to pay the way. Then the web wiped out that business model, and giants such as PC Week and InfoWorld went scrambling. Some large tech-oriented websites such as Vice have shut down, and I am sure more will follow.
Yes, AI is exciting, and there is a lot of work being done — even by humans — in the field. But it requires real capital and real brainpower, not just sock puppets and a cute dot-com name. Or at least, I hope so. And a word about building trust with your remote employees: fail at it, and the best ones will eventually migrate to companies with more liberal remote policies.

Fighting election misinformation

Last week I wrote about the looming AI bias in the HR field. Here is another report about the potential threats of AI in another arena. But first, do you know what the states of California, Georgia, Nevada, Oregon, and Washington have in common? Sadly, all of them have election offices that received suspicious letters in the mail last year. This year is already ramping up, and many election workers have received death threats just for trying to do their — usually volunteer — jobs. Many have quit after logging decades of service.

I have been following election misinformation campaigns for several years, such as writing about whether the 2020 election was rigged or not for Avast’s blog here. By now you should know that it wasn’t. But this latest round of physical threats — many of which have been criminally prosecuted — is especially toxic when fueled by AI misinformation campaigns. The stakes are certainly higher, especially given the number of national races, and CISA has released this set of guidelines in response.

And the election threats aren’t just a domestic problem. This year will see more than 70 elections in 50 countries, in many of which people are voting for their heads of state, including India, Taiwan, and Indonesia. Taken together, 2024 will see a third of the world’s population enter the voting booth. Some countries have seen huge increases in newly online voters: India’s last national election was in 2019, and since then the country has added 250 million internet users, thanks to cheap smartphones and mobile data plans. That could spell trouble for voters encountering the online world, and its misinformation, for the first time.

All this comes at a time when social media trust and safety teams have all but disappeared from the landscape; indeed, the very name for these groups will become a curiosity a few years from now. Instead, hate mongers and fear mongers celebrate their newfound attention and unblocked access to the network. (To be fair, Facebook/Meta announced a new effort to fight deepfakes on WhatsApp just after I posted this.)

While the social networks were busily disinvesting in any quality control, more and better AI-laced misinformation campaigns have sprouted, thanks to new tools that can combine voices and images with attention-grabbing clickbait headlines. That is not a good combination. Many of the leading AI tech firms — such as OpenAI and Anthropic — are trying to fill the gap. But it is a lopsided battle.

While it is nice that someone has taken up the cause for truthiness (to use a phrase from that bygone era), I am not sure that giving AI firms this responsibility is going to really work.

An early example happened during the New Hampshire presidential primary, where voters reported receiving deepfake robocalls in President Biden’s voice. The account used for this activity was subsequently banned. Expect things to get worse. Deepfakes such as this have become as easy to create as a phishing attack (and the two are often combined), and thanks to AI they are getting more realistic. It is only a matter of time before these attacks spill over into influencing the vote.

But deepfakes aren’t the sole problem. Garden-variety hacking is a lot easier. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US elections groups it surveyed, including numerous denial-of-service attacks. That stresses the security defenses of organizations that were never on the forefront of technology, something CISA and others have tried to help with through various tools and documents, such as the one mentioned at the top of this post. And now we have certain elements of Congress that want to defund CISA just in time for the fall elections. Bad idea.

Contributing to the mess is that the media can’t be trusted to provide a safe harbor for election results. Look what happened to the Fox News decision team after it called Arizona — correctly — for Biden back in 2020. Many of its staff were fired for doing a solid job. And while it is great that Jon Stewart is back leading Comedy Central’s Monday night coverage, I don’t think you are going to see much serious reporting there (although his debut show last week was hysterical and made me wish he were back five days a week).

Of course, it could be worse: we could be voting in Russia, where no one doubts what the outcome will be. The only open question is whether its czar-for-life will get more than 100% of the vote.

The looming AI bias in hiring and staffing decision-making

Remember when people worked at jobs for most of their lives? It was general practice back in the 1950s and 1960s. My dad worked for the same employer for 30 or so years. I recall his concern when I changed jobs after two years out of grad school, warning me that it wouldn’t bode well for my future prospects.

So here I am, ironically now 30-plus years into working for my own business. But high-frequency job hopping has also swelled the flood of resumes that hit hiring managers, which in turn has motivated many vendors to build automated tools to screen them. You might not have heard of the companies in this space, such as HireVue, APTMetrics, Curious Thing, Gloat, Visier, Eightfold and Pymetrics.

Add two things to this trend. First is the rise of quiet quitting, or employees who put in just the minimum at their jobs. The concept is old, but the increase is significant. Second, and the bigger problem, is another irony: we now have a very active HR market segment that is fueled by AI-based algorithms. The combination is both frustrating and toxic, as I learned from reading a new book entitled The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It should be on your reading list. It is by Hilke Schellmann, a journalism professor at NYU, and it examines the trouble with using AI to make hiring and other staffing decisions. Schellmann takes a deep dive into the four core technologies now deployed by HR departments around the world to screen and recommend potential new job candidates, along with the other AI-based tools that come into play to evaluate employees’ performance and inform judgments about raises, promotions, or firings. It is a fascinating look at this industry, and a scary one too.

Thanks to digital tools such as LinkedIn, Glassdoor and the like, sending in your resume for an opening has never been easier. Just a few clicks and it goes electronically to a hiring manager. Or so you thought. Nowadays, AI is used to automate the process with automated resume screeners, social media content analyzers, gamified qualification assessments, and one-way video recordings that are analyzed by facial and tone-of-voice AIs. All of these tools have issues: they aren’t completely understood by employers or prospects, they rest on spurious assumptions, and they can’t always quantify the aspects of a candidate that would ensure success in a future job.
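
To see how crude the first of these can be, here is a deliberately naive keyword screener. It is a toy sketch of my own, not any vendor’s actual algorithm, and the keywords and weights are invented — which is exactly the problem: whoever picks them is encoding assumptions about what a good candidate looks like.

```python
# Toy resume screener: score by weighted keyword counts. Illustrative
# only; the terms and weights are invented, which is precisely the
# weakness of this whole approach.
KEYWORDS = {"kubernetes": 3, "python": 2, "leadership": 1}

def score_resume(text: str) -> int:
    words = text.lower().split()
    return sum(w * words.count(term) for term, w in KEYWORDS.items())

resumes = {
    "keyword_stuffer": "python python kubernetes kubernetes leadership",
    "strong_candidate": "Decade of shipping distributed systems and mentoring teams",
}
for name, text in resumes.items():
    print(name, score_resume(text))
# The stuffed resume scores 11; the experienced candidate scores zero.
```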

What drew me into this book was that Schellmann does plenty of hands-on testing of the various AI services, using herself as a potential job seeker or staffer. For example, in one video interview she answers the scripted questions in German rather than English, and still receives a high score from the AI.

She covers all sorts of tools, not just ones used to evaluate new hires, but others that fit into the entire HR lifecycle. And the “human” part of HR is becoming less evident as the bots take over. By take over, I don’t mean the Skynet path, but relying on automated solutions does present problems.

She raises this question: “Why are we automating a badly functioning system? In human hiring, almost 50 percent of new employees fail within the first year and a half. If humans have not figured out how to make good hires, why do we think automating this process will magically fix it?” She adds, “An AI skills-matching tool that is based on analyzing résumés won’t understand whether someone is really good at their job.” What about tools that flag teams with high turnover? The cause could be one of two polar opposites: a toxic manager, or a tremendous manager who is good at developing talent and encouraging people to move on to greener pastures.

Having run my own freelance writing and speaking business for more than 35 years, I have a somewhat different view of the hiring decision than many people. You could say that I have rarely faced being hired for full-time employment, or that I face that decision multiple times a year, whenever I get an inquiry from a new client or from a previous client now working for a new company. Some editors I have worked with for decades as they have moved from pub to pub. They hire me because they are familiar with my work and value the perspective and analysis I bring to the party. No AI is going to figure that out anytime soon.

One of the tools I came across in the before-AI times is the DISC assessment, which, like the Myers-Briggs, is a psychological profiling tool that has been around for decades. I wrote about taking the test while attending a conference at Ford Motor Co. back in 2013. Ford was demonstrating how it uses the tool to figure out the type of person most likely to buy a particular car model. Back in 2000, I wrote a somewhat tongue-in-cheek piece about how you can use the Myers-Briggs to match your personality with that of your computing infrastructure.

But deciding whether someone is an introvert or an extrovert is a well-trod path, with plenty of testing experience accumulated over the decades. These AI-powered tools don’t have much of that history and are built on shaky data sets riddled with assumptions. For example, HireVue’s facial analysis algorithm was trained on video interviews with people already employed by the company. That sounds like a good first step, but having done one of those one-sided video interviews — basically just talking to the camera without interacting with an actual human asking the questions — means you get none of the feedback, the subtle facial and audio cues, that is part of normal human discourse. Eventually, in 2021, the company stopped using both its tone-of-voice and facial-analysis algorithms entirely, claiming that natural language processing had surpassed them.

Another example is counting how often you use first-person singular versus plural pronouns during the interview — I versus we, for example. Is this a proxy for what kind of team player you might be? HireVue says it bases its analysis on thousands of features like this, which doesn’t make me feel any better about its algorithms. Just because a model has many parameters doesn’t necessarily make it better or more useful.
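
Here is how thin that signal is in practice: a few lines of Python can compute an “I versus we” ratio from a transcript. The pronoun lists, and any threshold you might attach to the output, are pure invention on my part, which is the point: nothing connects such a ratio to being a good teammate.

```python
# Toy "I vs. we" feature of the kind interview-scoring tools reportedly
# compute. The pronoun lists are arbitrary choices, and no threshold on
# the output has any demonstrated link to job performance.
import re

FIRST_SINGULAR = {"i", "i'm", "i've", "me", "my", "mine"}
FIRST_PLURAL = {"we", "we're", "we've", "us", "our", "ours"}

def i_vs_we_ratio(transcript: str) -> float:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    i_count = sum(t in FIRST_SINGULAR for t in tokens)
    we_count = sum(t in FIRST_PLURAL for t in tokens)
    return i_count / max(i_count + we_count, 1)  # 0.0 = all "we", 1.0 = all "I"

print(i_vs_we_ratio("We shipped it together, but I wrote the hard parts."))
# 0.5 -- and whether that makes someone a "team player" is anyone's guess.
```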

Then there is the whole dust-up over built-in AI bias, something that has been written about for years, going back to when Amazon first unleashed its AI hiring tool and found that it selected white men more often. I am not going there in this post, but her treatment runs deep and shows the limitations of using AI, no matter how many variables the modelers try to correlate.

What is important, something Mark Cuban touches on frequently in his posts, is that diverse groups of people produce better business results. And that diversity can be defined in various ways: not just race and gender, but also mental and physical disabilities. The AI modelers have to figure out, as all modelers do, what the connection is between playing a game or making a video recording and actual job performance. You need large and diverse training samples to pull this off, and even then you have to be careful about your own biases in constructing the models. She quotes one source who says, “Technology, in many cases, has enabled the removal of direct accountability, putting distance between human decision-makers and the outcomes of these hiring processes and other HR processes.”

Another dimension of the AI personnel-assessment problem is the tremendous lack of transparency. Job prospects don’t know what the AI-fueled tests entail, how they were scored, or whether they were rejected because of a faulty algorithm, bad training data, or some other computational oddity.

When you step back and consider the sheer quantity of data an employer can collect — keystrokes on your desktop, website cookies that timestamp your visits, emails, Slack and Teams message traffic, even Fitbit tracking stats — it is very depressing. Do these captured signals reveal anything about your working habits, your job performance, or anything, really? HR folks are relying more and more on AI assistance, and they can now monitor just about every digital move an employee makes in the workplace, even when that workplace is the dining room table and the computer is shared with the employee’s family. (There are several chapters on this subject in her book.)

This book will make you think about the intersection of AI and HR, and while there is a great deal of innovation happening, there is still much work to be done. As she says, context often gets lost. Her book will provide plenty of context for you to think about.

CSOonline: How to strengthen your Kubernetes defenses

Kubernetes-focused attacks are on the rise. Here is an overview of the current threats and best practices for securing your clusters. The runaway success of Kubernetes among enterprise software developers has motivated attackers to target these installations with exploits specifically designed to leverage its popularity. Attackers have become better at hiding their malware, evading the often trivial security controls, and using common techniques such as privilege escalation and lateral network movement to spread their exploits across enterprise networks. While methods for enforcing Kubernetes security best practices exist, they aren’t universally well known, and they require specialized knowledge, tools, and tactics that are very different from securing ordinary cloud and virtual machine use cases.

In this post for CSO, I examine the threat landscape, what exploits security vendors are detecting, and ways that enterprises can better harden their Kubernetes installations and defend themselves.
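
As a small taste of what cluster hardening checks look like in practice, here is a sketch using the official Kubernetes Python client to flag privileged or root-running containers. It assumes a working kubeconfig and only inspects container-level security contexts, so treat it as a starting point rather than a complete audit.

```python
# Sketch: flag privileged or root-running containers cluster-wide.
# Requires the official client (pip install kubernetes) and a kubeconfig.
# Checks only container-level securityContext; pod-level settings and
# many other hardening angles are out of scope here.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc and (sc.privileged or sc.run_as_user == 0):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"container '{container.name}' is privileged or runs as root")
```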