Discussion: best practice for creating a CA cert?
Jason Haar
2014-09-29 06:59:29 UTC
Hi there

Due to the upcoming Google-instigated phasing-out of SHA-1, I'm looking
at creating a new enterprise CA (i.e. internal only).

If I just "click through" the defaults of "openssl ca", I'd probably end
up with a 2048bit RSA, SHA-2 (256) cert. So my question is, should I
future proof that by making it 4096bit and maybe SHA-2 (512)? (ie I want
the CA to be viable for 10 years, not 5 years). What is the performance
impact of increasing these values of the CA cert itself? I'd expect to
still only sign 2048-bit, SHA-256 server/client certs - but is there a
real performance downside to making the CA cert itself stronger? I don't
care if the CA takes 30 seconds longer to sign a cert - but I'd really
care if it made a web browser hang when talking to the resultant server
cert ;-)
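
(For what it's worth, overriding those defaults is just a couple of
explicit flags - a minimal sketch; the file names, subject, and 10-year
lifetime below are illustrative:)

    # Generate a 4096-bit RSA key and a self-signed root cert, naming
    # the digest explicitly (-sha256, or -sha512 for SHA-512) instead
    # of relying on whatever the installed OpenSSL version defaults to:
    openssl genrsa -aes256 -out ca.key 4096
    openssl req -new -x509 -key ca.key -sha256 -days 3650 \
        -out ca.crt -subj "/O=Example Corp/CN=Example Internal Root CA"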

Thanks!
--
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

Kyle Hamilton
2014-09-29 07:26:21 UTC
Generally, a client doesn't bother checking the signature on a certificate that's already in its local trust store. The idea is that if it's in the trust store, there's no need to verify its integrity, because the administrator already performed that verification when installing it.

Where this might have an impact is if your new certificate is cross-certified by another organization's root. You'll have to judge for yourself how likely this scenario might be for your environment.
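
(If cross-certification is on the table, one way to sanity-check how a
chain built through another organization's root would validate - the
file names here are hypothetical:)

    # Trust the external root, supply the cross-signed CA cert as an
    # untrusted intermediate, and verify a leaf cert through it:
    openssl verify -CAfile external-root.pem \
        -untrusted cross-cert.pem server.pem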

-Kyle H
Post by Jason Haar
So my question is, should I future-proof that by making it 4096-bit and
maybe SHA-512? [...]
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Jakob Bohm
2014-09-29 12:45:11 UTC
Out of general interest,

Assuming a "low e" (such as e=65537) RSA public key, how big is the
cost of going from a 2048 bit to a 4096 bit modulus for an
intermediary CA, given that verifications will significantly
outnumber signings for a CA key?
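
(One way to put numbers on it: OpenSSL's built-in benchmark reports
sign and verify operations per second at each modulus size. With
schoolbook arithmetic a modular multiplication costs roughly the square
of the modulus length, and e=65537 fixes the number of multiplications,
so a back-of-the-envelope guess is ~4x slower verification at 4096
bits:)

    # Compare RSA sign/verify throughput at 2048 vs 4096 bits:
    openssl speed rsa2048 rsa4096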
Post by Kyle Hamilton
Generally, a client doesn't bother checking a certificate that's in
its local trust store. The idea is, if it's in its trusted store,
there's no need to verify its integrity, because the administrator
already performed that verification.
Where this might have an impact is if your new certificate is
cross-certified by another organization's root. You'll have to judge
for yourself how likely this scenario might be for your environment.
On September 28, 2014 11:59:29 PM PDT, Jason Haar wrote:
[original question trimmed - see above]
--
Jakob Bohm, CIO, partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark. direct: +45 31 13 16 10
This message is only for its intended recipient, delete if misaddressed.
WiseMo - Remote Service Management for PCs, Phones and Embedded
Michael Sierchio
2014-09-29 14:30:27 UTC
Post by Jason Haar
...
So my question is, should I future-proof that by making it 4096-bit and
maybe SHA-512? [...] is there a real performance downside to making the
CA cert itself stronger? [...]
There are many places where a PKI breaks - hash collisions are far
down the list.

Most internal CA implementations offer no more effective security or
trust than just using self-signed certs - the objective seems to be
merely to make browsers stop complaining about the SSL connection.
Without subsidiary CAs, good discipline about their use, and a CRL
distribution point baked into certs (or OCSP), you can only verify
that a cert was valid when it was signed; you have no way of dealing
with private-key compromise, etc., which happens all the time.

Spend some time thinking about revocation, cert lifespan, etc., if you
want to make a CA "stronger."
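
(In openssl.cnf terms, baking revocation pointers into the certs you
issue is a one-time change to the extensions section used at signing
time - the URLs below are hypothetical:)

    # Extensions applied to issued certs (e.g. the section named by
    # the x509_extensions setting used by "openssl ca"):
    [ usr_cert ]
    crlDistributionPoints = URI:http://pki.example.com/ca.crl
    authorityInfoAccess   = OCSP;URI:http://ocsp.example.com/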

- M
Salz, Rich
2014-09-29 15:05:17 UTC
Post by Michael Sierchio
There are many places where a PKI breaks - hash collisions are far down
the list.
What he said.

4K RSA with SHA-256 is fine for a CA. Most likely, operational issues will cause you problems. Invest in an HSM.

--
Principal Security Engineer, Akamai Technologies
IM: ***@jabber.me Twitter: RichSalz

Jason Haar
2014-09-29 20:15:55 UTC
Post by Michael Sierchio
Most internal CA implementations offer no more effective security or
trust than just using self-signed certs [...] Spend some time thinking
about revocation, cert lifespan, etc., if you want to make a CA
"stronger."
Whoa! Big assumptions in there, Batman! Don't for a minute assume you
have any understanding of how we use said CA cert. Yes, all of that
was thought through 12 years ago when we started doing this. In my
experience, our company has been one of the few enterprise environments
where a PKI has actually, fundamentally improved our security posture -
and that was ENTIRELY through focusing on processes, not the technology!

(sheesh, ask a simple question... ;-)
--
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

Gregory Sloop
2014-09-29 21:08:16 UTC
Post by Michael Sierchio
There are many places where a PKI breaks - hash collisions are far
down the list. Most internal CA implementations offer no more
effective security or trust than just using self-signed certs [...]
Spend some time thinking about revocation, cert lifespan, etc., if you
want to make a CA "stronger."
Hoping this doesn't veer into a flame-fest [it's intended in the most friendly way possible], but I think there are two non-mutually-exclusive factors at play. [Actually, I'm quite sure there are more, but these are the two I see.]

1) As mentioned, operational flaws, not technical ones, are often the ones that "get" people and organizations, and it's easy to focus [practically exclusively] on "technical" issues to our detriment.

So, worrying *too much* about your hash choice [i.e. SHA-1 vs SHA-512] is probably effort misspent.
The reason: a persistent threat to your organization is likely to find an operational flaw and penetrate through some other means... it probably won't be by breaking SHA-1. [For example, if you're a high-value industrial-espionage target.]

...BUT...

2) Say, 10 years from now, with a lot more horsepower, an attacker who just happens to have the data stream from an old session tries to make a run at that historic data. Why would we want to make it easy for the attacker by using SHA-1? Sure, the likelihood of such an attack is small, but spending at least *some* time re-thinking how well our choices might stand the test of time probably makes sense. [20 years ago, probably a lot of people would have said, "DES - well, yeah, it's theoretically possible someone will break it, but why are you focusing on that!?!"]

So, I do see merit in the question. It's not an either/or sort of game. We can chew gum and walk at the same time. [At least usually...]

IMO, focusing *purely/exclusively* on operational *or* technical methods is a poor choice.

Spending some time re-assessing the security of generated certs and what's currently possible and likely to work for a large percentage of clients/hosts makes sense.

As I see it, even discussing all the ramifications of operational security for a particular situation is very time-consuming and, IMO, really quite difficult to do well on the list, since each person's/organization's needs and capabilities are so very different. Discussing technical issues tends to be a lot more modular and "easy." [And perhaps it's this ease that lulls us into spending too much time on it.]

So, while I think the advice quoted above is excellent *general* advice, I think it does a bit of a disservice to the OP and those of us who just happen to be looking at these issues at the moment. [It also avoids answering the original question in any detail. And unfortunately, I don't have the data to impart, so I'm not doing a lot better. :) ]


-Greg
