HocusLocus 2 points ago +2 / -0

You have explained it VERY well -- OSI layer 4 says it all. Sucks. Without a front end that fully terminates the HTTPS request and performs cookie-based whitelisting, there is no way I could easily be declared a 'friendly'.

Now it just so happens that my public IPv4 source address could identify me as friendly, but I know there are so many massively-NATed networks out there that bad actors could hide behind.

I am assuming Cloudflare does hold the customer's HTTPS private keys and can see through TLS if they want to, right? They have to accept all connections and then hand off the good ones, and that would be necessary to issue temporary redirects with Cloudflare junk appended to the ?x=x query string.

When I think of session whitelisting, it goes by stages of increasing desirability:

  • Website provides Front End with unique whitelist strings. This becomes ridiculous very quickly if there are thousands of logged-in users.
  • Better: Website supplies Front End with a public cert and 'signs' session IDs with the private key. The Front End no longer needs to match a unique session string for every user; all it is doing is verifying signatures.
  • When I suggest RSA for this purpose, it need not use the insanely large primes we rely on for full cryptographic assurance; even a smaller key length could be effective, especially if Website rotated the certificates (which are only ever shared between Website and Front End) quickly.

If session whitelisting could be achieved, in this or the next generation of front-end infrastructure, then Website (such as TDWIN) might, instead of trying to identify bad logged-in actors during a flood, maintain a 'good citizen score' for logged-in users over time that eventually declares them human.