Re: about SecurID on mobile devices



John Doe <john....@xxxxxxxxxx> wrote:

I was reading about RSA SecurID (http://en.wikipedia.org/wiki/Securid)
two-factor authentication when I saw it was available for several mobile
devices (e.g. iPhone, Windows Mobile, BlackBerry, Java ME, Palm,
Symbian, ...).

I thought SecurID was possible only because it wasn't possible (to an
extent) to read the seed stored in the device (because the hardware
tokens were (again, to an extent) tamper-resistant). To me there is no
tamper resistance in any mobile device like an iPhone, a Windows Mobile
or a BlackBerry device, so with little software-only reverse engineering
(that could be coded once & then automated) the seed can be read quite
easily.

Did I miss something, or does it make the authentication a one-factor
authentication?

Hi JD,

Thanks for raising these interesting questions.

Security pros have been debating the relative security of physical
tokens and software token-emulation modules in two-factor
authentication (2FA) for over a decade. Some, like Mr. Doe, have
suggested that since virtually all SecurID devices, hardware or
software, are today using AES in a known manner, the "secret key" in
software is just a piece of data, essentially a so-called secret that
could be easily plucked out, "known," and used at their convenience by
nefarious parties.

Abstractly, this debate really boils down to what protection the
device has, physical or virtual, for the SecurID secret it holds, the
SecurID's AES cryptographic key -- although it should also be obvious
to all that a 128-bit secret is inherently somewhat less accessible to
casual collection than a typical password, the traditional exemplar of
"something known."

Discussions like this are fun, even informative -- and I'll play the
game -- but we ought to acknowledge early that this is a bit like
debating how many angels fit on the head of a pin. JD's question
presumes that InfoSec definitions are rigid, black and white with no
shading; cosmic law. That's seldom the case in the real world -- not
in security practice, nor in much else. The real topic here is the
trade-offs involved in implementing most security devices, balancing
between maximum security, cost, ease-of-use, and other operational
requirements.

The basis of the three-factor model for remote verification of
identity (something "known," something "held," and something "one is")
is that each of these factors resists compromise in a separate
dimension. That is, each has to be subjected to a separate attack.
(The list of factors could easily be expanded to include, say, where
someone is, or who someone knows, and often has been.) The degree to
which each "factor" can resist compromise or corruption is a worthy
topic -- but let's not presume to dice reality too finely, exalting
some implementations and condemning others with the certainty of
theologians. Scripture this ain't.

A brass key was once good evidence of "something held," until we added
a requirement that it resist replay. (It was quite a while later that
the SecurID allowed us to lock the user's ID validation request into a
tight time-frame.) Back then, as now, the holder of a toothed key was
expected to protect it against theft or illicit copying, and promptly
report its loss.

As Ilmari Karonen suggested, a list of S/key OTPs, typically carried
in a wallet, is still good evidence that a unique paper list, with
OTPs that resist replay, is in the hands of a specific person. That
works for 2FA. For many years, probably even today, many people
carried OTP grid cards, issued by continental banks to validate
their transactions. 2FA too.

Paul Rubin wrote:
#> If a token is copyable, then it is not a "something you
#> have" factor in two-factor authentication, since two people
#> might have it.

I disagree. It's obviously not a hardware token, so it lacks some
advantages and potential security attributes, but a Grid card of OTPs
certainly is a viable and practical second factor for 2FA. To get
mine, you'll have to come after me, personally.

These categories are malleable; lock them down too tightly and you may
as well spin in mystical circles and babble. Absolutist claims for
any of them are silly; we live and work in a relativistic world. With
a big enough hammer you can break anything. Anyone with an OTP token
and a phone can instantaneously share, or "copy," a PIN and an OTP
token-code with a dozen other people -- any one of whom could
potentially masquerade as the legit token-holder. So what?

2FA presumes a bounded and managed universe, with appropriate
policies, management oversight, and audit to nail those who misuse the
technology. The first factor is susceptible to "human engineering;"
the second is vulnerable to theft; and the third, to crude surgery.
Savvy risk management tries to block or mitigate a reasonable
selection of threats with appropriate defensive measures. For the
purpose of our discussion, what matters is that each "factor" is only
relatively secure, but that's often enough.

Mr. Doe worries:
I also think that if a hacker is able to get control of your PDA for 5 minutes,
it can get the PDA's memory, read the seed & would be able to copy it &
generate valid tokens at any time. Then tokens become useless & only the
PIN remain hidden, so we go from a 2-factor authentication to a simple
PIN-based 1-factor authentication.

The bottom line, of course, is that because the SecurID AES "seed" or
key, the crown jewel, is held in software -- and inevitably has to be
decrypted when it is used to generate the token-code -- it is
ultimately accessible to the tools of a determined attacker with time,
repeated access, and some specialized knowledge. It also helps if he
doesn't have to worry about detection.

Risk, by definition, is what we find in the gaps between the threats
known, blocked, or mitigated, and those (some unknown) which are not.
Risk is managed most effectively when it is acknowledged.

RSA goes to some lengths to protect the SecurID seed -- but a PDA or
smart phone with a SecurID software app is simply never going to have
anything like the security provided for an AES key stored in a hardened
SecurID hardware token. That, fortunately, is not the same as saying
that the device cannot offer significant resistance to the hit-and-run
theft, or rapid cloning, that JD hypothesizes.

When an RSA authentication server generates the seed or key for a
SecurID software module, that seed is typically assigned to a specific
device, identified by the serial number of the device, be it a PDA,
smart phone, or whatever. The SecurID application on the device will
verify the local serial number before it allows the seed to even be
imported.

RSA extends this mechanism to bind the encrypted seed to the device.
The 128-bit SecurID "seed" or key -- at rest, in the PDA's RSA data
store -- is protected by AES encryption. This layer of encryption,
typically the target of an informed attacker, uses an ephemeral key
(changing every time it's used) which is generated within the phone,
using a random base and data (typically an internal serial number,
plus other metrics) unique to that device.
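
(For the technically curious, here is a rough Python sketch of the
general idea: derive a device-bound wrapping key from a random base
plus device-unique data, and use it to encrypt the 128-bit seed at
rest. The function names, the SHA-256 derivation, and the single-block
AES-ECB wrap are my own illustrative assumptions -- not RSA's actual
scheme, which isn't published in that detail.)

    import os
    from hashlib import sha256
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def derive_wrapping_key(serial, metrics, random_base):
        # Mix a random base with data unique to this device, so the wrapping
        # key can only be rebuilt on the original host. (Illustrative KDF.)
        return sha256(random_base + serial.encode() + metrics).digest()[:16]

    def wrap_seed(seed, serial, metrics):
        assert len(seed) == 16          # 128-bit SecurID seed = one AES block
        random_base = os.urandom(16)    # fresh base, so the key changes each time
        key = derive_wrapping_key(serial, metrics, random_base)
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return random_base, enc.update(seed) + enc.finalize()

    base, protected = wrap_seed(os.urandom(16), "BB-123456", b"imei+other-metrics")

On any other device, the same derivation yields a different key, so the
wrapped seed stays opaque -- which is the point of the binding.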

Without some reverse engineering and coding, plus some special
knowledge about the device that was the original host, a copied or
stolen phone's database -- holding the SecurID seed -- will not be
easily opened on another device. An attacker cannot, for instance,
just copy the SecurID module off a BlackBerry and run it on another
BB.

Beyond the technology, perhaps the most robust line of defense is
the degree to which many users -- particularly worker bees in
corporate environments -- depend upon and are virtually married to
these devices. Indeed, for many, it is as much an extension of their
person as their wedding ring. It is continuously used, and protected,
in a way that no other device ever issued to them by their employer --
maybe including hardware OTP tokens -- has ever been.

Software apps for token-emulation on various platforms and hosts vary
quite widely, and -- as per Courtney's First Law -- discussing their
relative security really only makes sense in context, when someone
knows the application and the environment in which these
authenticators are to be used. That's the turf of a CIO or network
manager, who today often has a great deal of control over the
selection and configuration of mobile devices, as well as the security
policies and practices that are mandated for their users.

In some PDAs and phones, defensive options are considerably stronger
than in others -- and all have room for improvement as the mobile
platforms begin to offer stronger provisions for their own internal
process and data security. (TPMs look really useful; even fingerprint
sensors will be an option. Joe Ashwood noted that mobile phone SIMs
are already "smart cards." Physically, yes -- but today, access paths
to the SIM and the functions possible on SIM cards are severely
constrained.)

Nevertheless, the best case scenario, from a security point of view,
is surprisingly good -- at least in so far as it protects the
SecurID's AES seed or key from JD's suggested hit and run attack on
flash memory.

1. Many PDAs or smart phones require a PIN to unlock the device and
automatically reset that protection when the phone is left unused.

2. To release the stored (and encrypted) AES key in order to generate
a SecurID pass-code, many mobile devices can require yet another
(user-defined) password or PIN.

3. Then, for 2FA, the user or consumer has to provide a third
memorized PIN to be mated with the SecurID token-code.

In devices which implement "Pin-Pad" mode, that PIN is added (no
carry) to the SecurID's PRN token-code to generate the pass-code that
is relayed, via a networked PC or workstation, to the remote RSA
authentication server to validate a user's request to access protected
resources.
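
(A quick Python sketch of that "no carry" arithmetic: each PIN digit is
added to the corresponding token-code digit modulo 10. How a short PIN
is aligned against the token-code is my assumption here, not RSA's
documented rule.)

    def pinpad_passcode(tokencode, pin):
        # Digit-wise addition with no carry: (token digit + PIN digit) mod 10.
        pin = pin.rjust(len(tokencode), "0")   # alignment of a short PIN: assumed
        return "".join(str((int(t) + int(p)) % 10)
                       for t, p in zip(tokencode, pin))

    print(pinpad_passcode("246801", "1357"))   # -> "247158"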

(I won't get into the way in which the RSA Authentication Manager, on
the RSA authentication server, dynamically interacts with the phone-
based client to generate the SecurID AES key assigned to that phone,
except to note that the key is never transmitted between the two. For
details, visit RSA Labs for CT-KIP at: <http://tinyurl.com/cbs5b6>.)

When generated on a PDA, phone, or some other smart device, the
SecurID token-code, plus the user's memorized SecurID PIN -- perhaps
mingled in the "Pin-Pad" mode -- is used, on a networked PC or
workstation, just like any other two-factor passcode. In the corporate
environment, that logon and session is usually further protected by a
VPN. The user's access and transaction pattern might also be tracked
by RSA's risk engine, what RSA calls "Adaptive Authentication," which
compares the pattern of use with a norm and can request supplementary
validation if it deems the risk associated with a transaction or
access request requires it.
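
(As a generic illustration -- emphatically not RSA's proprietary
scoring -- a risk engine of this sort boils down to comparing a request
against a stored per-user norm and demanding supplementary validation
when the deviation crosses a threshold. The attributes and weights in
this Python sketch are invented for the example.)

    from dataclasses import dataclass

    @dataclass
    class UsageNorm:
        usual_countries: set    # countries this user normally connects from
        usual_hours: range      # local hours during which logins are typical

    def risk_score(norm, country, hour):
        score = 0
        if country not in norm.usual_countries:
            score += 50         # weights are invented for the illustration
        if hour not in norm.usual_hours:
            score += 25
        return score

    def needs_step_up(norm, country, hour, threshold=50):
        # True -> ask for supplementary validation before honoring the request
        return risk_score(norm, country, hour) >= threshold

    norm = UsageNorm(usual_countries={"US"}, usual_hours=range(7, 20))
    print(needs_step_up(norm, "RO", hour=3))   # True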

The bad news, of course, is that -- even in the enterprise --
pressures for greater ease-of-use, and other operational concerns,
lead many institutions to cut back on these available protections,
notably the second password for OTP generation. RSA's AA, the risk
engine, is a huge commercial success, but it's usually being installed
to support consumer networks. Some corporate SecurID sites reject even
important features, like the device-specific db key, in order to allow
for automatic backup of all mobile devices, or the rapid reassignment
of users or devices, with "drop and restore" reactivation. Currently,
if a user forgets the password he selects to secure the SecurID seed
-- the second PIN above -- there is no retrieval, and some enterprises
balk at that. Vendors recommend, but the customer -- with her own
priorities -- selects the configuration.

Withal, in a SecurID-enabled PDA or phone, the defenses are still
wholly in software, so they offer only relative security for the stored
SecurID secret. Better than mere obfuscation -- but considerably less
than a true encryption scheme, with the symmetric key buried in booby-
trapped silicon or safely held elsewhere.

Of far greater concern, ultimately, is the -- still rare -- active
attack that places a monitor in the phone's CPU that can watch the
seed's retrieval and use in the SecurID app. Some PDAs and phones
allow the user, or the enterprise, to establish policies that block
unauthorized apps from being installed on these devices. Some don't.

The inevitable vulnerabilities of a SecurID in software make the work
habits -- the physical and virtual security provided by the person who
carries or uses a PDA or phone with an OTP app -- much more important
than they would be with a physical SecurID token. Yet these mobile
devices apparently provide sufficient security, in the judgement of
many IT managers, largely because smart phones and PDAs have become so
useful and necessary that they become appendages; extensions of
ourselves; protected as well as the user's wallet, if not better.

JD:
I like the original idea of SecurID (or the CRYPTOCard equivalent), but
only because the hardware assures to an extent the protection of the
seed. To me doing it on a PDA makes it completely weak. :-\

Did I miss something?

Physical, hand-held, one-time password (OTP) tokens are not all the
same either. With different engineering specs, different OTP tokens
offer more or less resistance to various specific physical or virtual
attacks. Some buyers say DPA-resistance, for instance, is a critical
issue; sometimes it's not even on the check-list. (To ameliorate
perceived risks, some large European banks routinely replace some
vendors' nominally "long-lived" OTP hardware tokens every year.)

One of the great strengths of the traditional hand-held OTP token --
something largely still valid for SecurID-enabled BlackBerries, PDAs,
etc. -- is that it removes the process of generating the
authenticating token-code from the networked PC. Yet, in various OTP
products, from RSA and others, that traditional "air gap" between the
token and the network has been repeatedly compromised to leverage
security services, PKI among them, and other "smart card" features.

The emergence of the mobile phone as the Universal Token, however,
may have fundamentally redefined the market for personal
authenticators. It offers some institutions the very attractive
proposition of leveraging those phones with OTP apps to economically
offer all employees stronger-than-passwords authentication, instead of
(or in addition to) the 5-10 percent of their employees who might
carry SecurID hardware tokens.

The consumer market, laden with gargantuan potential, is dominated by
cost and ease-of-use concerns, since consumers typically choose to
accept 2FA or not. (And us wee folk, we love our mobile phones.) For
the enterprise, universal coverage might easily outweigh concerns
about less than perfect security on phone-based OTP modules, in an
admittedly already imperfect security matrix. (Isn't that, indeed,
what the mantra, "security in depth," is intended to both acknowledge
and address?)

Meanwhile, for good or ill, the market has spoken.

Since the mid-'90s, actually, buyers of authentication for IT
environments have increasingly demanded choices: a spectrum of OTP
alternatives that are stronger than static passwords, but with
multiple form-factors offering different levels of inherent security,
degrees of ease of use (and administration), at various prices. Some
buyers always go with the most robust hardware OTP token. As with
lingerie, no one size fits all.

In this, we must acknowledge competence: IT managers understand
tradeoffs. They make choices about risks and live with them daily --
and they sensibly insisted they could make those choices, for their
environments, their people, their assets, far better than anyone
else. Vendors took heed, even when purists howled.

Paul Rubin noted:
#> The idea of tokens is that they are uncopyable, or at least
#> difficult to copy (e.g. something like a smart card).

Close. The original concept called for a personal, physical, hand-held
device, difficult to clone, that could safely generate one-time
passwords that are not susceptible to illicit capture and replay.

When all tokens were hard, the trade-offs seemed minimal. The SecurID
first came to market in 1987, certified as secure by the NSA, as was
then common practice. RSA's "time-synched" SecurIDs won the enterprise
OTP market, largely because they were easier to use than those of its
competitors, which relied upon an interactive challenge/response
mechanism. Over the next decade, however, the physical, sealed, hand-
held token was the first element in the classical model to be
compromised for greater market penetration.

In this, RSA was a laggard among its competitors. RSA was pushed into
offering a SecurID module for PCs by a major customer in '96. Then, in
'99, RSA offered, via free download, SecurID modules for the Palm
PDA. The first corporation that demanded SecurID apps for its PCs
insisted that its internal physical security offered sufficient
protection for a mass installation. With SecurID for the Palm, RSA
finally decided that, with device binding, it could protect the
SecurID seed sufficiently -- and that its proprietary SecurID hash
(while still unpublished due to contractual commitments to early
customers) did not really need iron-clad security.

The OTP market endured far greater shocks than these compromises. In
1998, Paul Kocher's DPA and associated side-channel attacks tossed
all the OTP vendors, along with much of the applied crypto world, into
turmoil with an efficient physical attack that potentially gutted
every OTP product on the market. The risk of surreptitiously cloned
tokens soared. Some vendors reacted more quickly than others; some
never did. Then, in '99, the ANSI X9 committee withdrew the X9.9
standard, after it documented a DES-MAC attack that allowed an
attacker to brute force a seed after observing two challenge-response
pairs. That decertified a whole range of OTP products, pushing all C/R
products into event-synch to hide the challenge side of the pair.

Here and there, the sky fell, but almost no one noticed. By contrast,
OTP vendor design compromises, which slightly weakened the original
security model, seemed almost insignificant, with gains well worth the
trade-off. The spectrum of OTP products -- already with an installed
base of many tens of millions -- evolved, albeit painfully slowly.

Joseph Ashwood wrote:
..>> SecurID depends on a lot of things remaining secret.

Hi Joe. Sorry, but that's just not true. Never has been.

With the contemporary AES SecurID, the cipher is surely a non-issue.
Historically, RSA has always insisted that the *only* necessary secret
in any viable OTP system is the secret seed or key. I've been a
consultant to RSA since before the first SecurID was sold, and I wrote
a lot of the early SecurID literature to emphatically declare just
that.

..>> Either the algorithm had to remain secret in order for the
..>> device ID numbers to not be useful, or the algorithm was
..>> published. The device ID number is printed clearly on
..>> hardware devices; this is necessary for registering the
..>> token with the server.

Popular security mechanisms collect rumors and folklore like dust on a
Greyhound. I don't know which one you've heard, but it's way out of
date. Veteran ACE admins used to collect and share war stories and
myths about the original SecurID for a chuckle. Many myths seemed to
claim that the SecurID's serial number -- embossed on the back of most
tokens, and used (in the old 64-bit SecurID) only to link the remote
SecurID with the appropriate seed on the server, when a token is first
assigned to a user -- was a CrackerJack key to some mysterious crypto
backdoor into the SecurID's OTP output.

It wasn't true for the classic 64-bit SecurID, and it's certainly not
true for the AES SecurID.

(The SecurID's classic Brainard hash was reverse engineered and
published on the Net in 2001, to much hullabaloo but little effect.
Two years later, in 2003, Scott Contini and Lisa Yin offered the first
of two independent analyses of the 18-year-old Brainard hash which
suggested that an extended statistical analysis of a large number of
token-codes might allow a token to be eventually cracked and cloned. A
year later, Biryukov, Lano, and Preneel brilliantly extended that
theoretical attack -- but there was never, ever, any claim by anyone
that they had actually cracked or cloned a 64-bit SecurID token.)

AES was finally certified in November 2001. A little more than a year
later, RSA replaced its proprietary Brainard hash with standard AES in
a new, DPA-resistant, SecurID design with a 128-bit secret key. RSA
sells 4- and 5-year tokens, but most SecurIDs are exchanged every three
years, so the churn quickly upgraded the installed base. (The classic
64-bit SecurID, when it was still sold, was hardened against the
statistical attack by simply pre-calculating each token's lifetime OTP
series and tossing out the rare collisions.)

The original SecurID put Current Time and its 64-bit secret through
John Brainard's one-way function to generate a series of 6-8 digit
token-codes that were continuously displayed on the SecurID's LCD.
21st century SecurIDs use AES, in standard ECB mode, with a true-
random 128-bit key, to encrypt:

- a 64-bit standard ISO representation of Current Time
(yr/mo/day/hour/min/second),
- a 32-bit token-specific salt (the serial number of the token), and
- another 32 bits of padding, which can be adapted for new functions
or additional defensive layers in the future.

These inputs, conflated and AES encrypted, now generate the series of
6-8 digit OTP token-codes that are displayed on a SecurID's LCD... or
perhaps in a browser toolbar. In the SecurID's trademark rhythm,
these token-codes typically roll over every 60 seconds.

ECB mode in AES is executed on 128-bit blocks, of course, so RSA had
to pad the standard 64-bit ISO expression of Current Time with another
64 bits. Using the serial number as a token-specific 32-bit salt
blocks attempts to pre-calculate a library of possible token-codes for
all 128-bit seeds. This, in turn, means that any brute-force attack on
the AES SecurIDs will have to target an individual token.
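
(To make that block layout concrete, here is a rough Python sketch of
the construction just described: a 64-bit representation of Current
Time, the 32-bit serial-number salt, and 32 bits of padding, AES-128
in ECB mode under the token's seed, then reduced to a 6-8 digit decimal
code. The exact time encoding, byte ordering, and the final
digit-extraction step are my assumptions; only the input structure
follows the description above.)

    import struct, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def securid_style_tokencode(seed, serial, epoch, digits=6):
        assert len(seed) == 16        # true-random 128-bit key, unique per token
        t = time.gmtime(epoch)
        # 64 bits of Current Time (yr/mo/day/hour/min, seconds zeroed for the
        # 60-second rollover) -- this particular packing is an assumption.
        time64 = struct.pack(">HBBBBH",
                             t.tm_year, t.tm_mon, t.tm_mday,
                             t.tm_hour, t.tm_min, 0)
        salt32 = struct.pack(">I", serial & 0xFFFFFFFF)   # token-specific salt
        block = time64 + salt32 + b"\x00" * 4             # 32 bits of padding
        enc = Cipher(algorithms.AES(seed), modes.ECB()).encryptor()
        ct = enc.update(block) + enc.finalize()
        # Reduction to decimal digits: assumed for the sketch, not RSA's method.
        return str(int.from_bytes(ct, "big") % 10 ** digits).zfill(digits)

    print(securid_style_tokencode(b"\x01" * 16, serial=123456789, epoch=time.time()))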

("The serial number?" I groaned, when I first saw the way the AES
SecurID protocol used the serial number as a salt. I just knew that
all those weird "serial number" myths would reappear, somehow
validated, despite anyone's protests or explanations. John Brainard of
RSA Labs, the wizard who designed and implemented the crypto for both
generations of the SecurID, laughed at me -- but he knew exactly what
I was thinking. He too collects SecurID war stories and myths.)

Millions of SecurID software apps are in circulation; downloaded,
free, from the RSA website. For SecurID software modules, the AES seed
or key (digitally signed) is what RSA actually sells. The RSA
Authentication Manager only accepts RSA-signed keys, and only an RSA
authentication server can support SecurIDs.

Mr. Ashwood also reported:
..>> For security the server must remain secure, its
..>> security has been questioned repeatedly.

I don't believe this is true. RSA recommends a hardened platform;
sells its appliance with a hardened platform (with unnecessary
services removed); and has always insisted that its authentication
server be a stand-alone machine.

In the first 20 years the SecurID was in the market, I can recall only
one incident in which an RSA authentication server was maliciously
compromised, and that was by a trusted administrator, an inside job.
If there were other incidents in which the OS platform was
compromised, I wouldn't be shocked -- but I suspect it is a fairly
rare occurrence. (Reports of any incidents would be very welcome.)

..>> For the security currently offered it depends on the
..>> secrecy of the ID number, it can be argued that this is
..>> actually SAFER with a PDA because at least the
..>> attacker has to dig around the PDA instead of just
..>> checking the back.

Duh? With respect, Joe, this isn't up to your normal standard of
informed tech analysis. The SecurID serial number is all but
irrelevant, a mere salt; the essential security lies in the 128-bit
AES key alone.

..>> SecurID is certainly safer than password alone, but
..>> SecurID is one more in a long series of not quite
..>> complete solutions. For what it is, SecurID is a
..>> wonderful solution, but just as with any other tool if you
..>> attempt to apply SecurID to the wrong situation it will
..>> be insecure.

No one with any sense is going to argue with your last line. ;-)

The SecurID, like all OTP tokens, is only offering a basic
authentication function, and that, in itself, is nothing near a
"complete solution." That's what an IT architect's overlapping layers
of security, each less than perfect, are all about, right? A dynamic
security culture is an evolving process, for the token-holder, the
issuer, and the vendors too.

Suerte,
_Vin

PS. I beg the indulgence of the newsgroup for the length of my
comments. I had a lazy rainy day, and the topic intrigues me. My
bias, I trust, is overt.