Thursday, March 29, 2012

Gender disparity in tech: Free classes bucking the trend?

I recently facilitated a class on Object Oriented Programming with Java. I coordinated with the teacher and students to get everyone together at the right time and place, and passed messages between them. The class was free and community-organized, so there was no traditional academic structure; anyone who wanted to could come.

Halfway through the class I noticed something interesting.

At least 75% of the class was made up of women. When asked, most of the students (both male and female) had practically zero programming experience and mostly worked in web development or graphic design.

I was trying to figure out why such a class would draw so many women when all I usually hear is how under-represented women are in tech fields. Then I remembered that most of the classes put together by this organization have significant female attendance, on average. Almost all of the advertising for these classes is word of mouth, and since many of the organizers are women, it makes sense that their social networks would consist at least partially of women or women-centric user groups.

Now I'm thinking more about the social dynamics at play and how they relate to who gets into the tech field. Would women be less likely to jump into classes if they knew they'd be the only woman in the class? Some might say no, but I've been in classes that were all women and it can be a little intimidating for me, a guy. If the entire tech field were dominated by women, would there be a stigma against men getting into tech, because it might mean men are being "girly"?

Of course, this was just one class, so it's impossible to draw anything solid from attendance numbers alone. But my guess is that if more women were teaching classes to women, there might be a bigger turnout than you'd expect in more typical academic settings.

Tuesday, February 28, 2012

hacking

Hacking is not programming.
Hacking is not learning.
Hacking is not making.
Hacking is not sharing.
Hacking is not networking.
Hacking is not something you do.

Hacking is how you do something.

Saturday, February 11, 2012

Captive Portal Security part 1

CAPTIVE PORTAL SECURITY, pt. 1

by Peter Willis


INTRODUCTION

This brief paper will focus on the security policies of networks which require payment or web-based authentication before granting access. The topics of discussion will range from the common setup of captive portals to the methods available to circumvent authorization and the ways to prevent such attacks. A future paper will cover the captive portals themselves: their design and attack considerations.

TABLE OF CONTENTS

I. CAPTIVE PORTAL OVERVIEW
II. ATTACK METHODOLOGY
III. DEFENSE TACTICS
IV. SUMMARY
V. REFERENCES


I. CAPTIVE PORTAL OVERVIEW

What is a captive portal? A captive portal (or CP) is a generic term for any network which presents an interface, usually a website, that a user must go through to be authorized to access the network or the internet. Typically this takes the form of a web page where one accepts an agreement not to tamper with the network. The term also covers college campus log-in screens, corporate guest account access, for-pay wifi hotspots, etc.

In this paper I will mostly be discussing the "wifi hotspot" form of CP, where payment, user authorization or clicking an "Accept" button gets you access to the internet. Many attacks that circumvent these networks rely on remote dedicated servers and thus won't be suitable for bypassing all forms of CPs (as some networks are intended to grant access to internal-only networks).


II. ATTACK METHODOLOGY

There are many different attacks one can perform to circumvent a CP, ranging from the simplistic to the complex. Each has its benefits and weaknesses relating to performance, reliability and the possibility of detection by an IDS. I will briefly outline the methods here, with more examples and detail given throughout the paper.


1. Tunnel over DNS

This method involves tunneling network- and transport-level packets encoded in DNS records. The attack works because of the design of the DNS system: a domain or sub-domain can delegate where the answers for its DNS records come from using an NS record, and that NS record can point to your third-party server. This is useful for bypassing CPs because most caching nameservers (like those on a CP) must forward requests on to your server if they want them answered properly, which means the CP can be used as a sort of DNS proxy. Set up a custom nameserver that can pack and unpack packets, plus a client to transmit them, and you have a two-way connection with a server on the internet. Make an SSH connection through it and you have a secure tunnel.
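To make the packing concrete, here's a minimal Python sketch of how a client might encode raw bytes into query names under an attacker-controlled domain. The domain and function names are illustrative (this is not any particular tool's code), and real tools additionally add sequence numbers and respect the 255-byte limit on a full DNS name, which is omitted here.

```python
import base64

# hypothetical domain whose NS record delegates to our third-party server
DOMAIN = "t.example.com"
MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_query(payload: bytes) -> str:
    """Client side: pack raw bytes into base32 labels under our domain."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [DOMAIN])

def decode_query(qname: str) -> bytes:
    """Server side: strip our domain's labels and reverse the packing."""
    data_labels = qname.split(".")[:-(DOMAIN.count(".") + 1)]
    b32 = "".join(data_labels).upper()
    b32 += "=" * (-len(b32) % 8)  # restore the base32 padding we stripped
    return base64.b32decode(b32)
```

The server answers each such query with data of its own (e.g. in TXT records), giving the two-way channel described above.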

One tool used to exploit this property of DNS is iodine[1]. This tool has enhanced security and methods to auto-detect the type of network it passes through and get the best connection possible. Unfortunately it doesn't work as well in the real world as it does on paper.

The original implementation, OzymanDNS by Daniel Kaminsky, is (once modified) actually the most reliable working solution, even though it operates at the presentation layer instead of tunneling raw IP packets. This has a couple of benefits: it can be run by an unprivileged user (on the client side) and it is modular and easy to hack on (it's written in Perl). I have modified a version of this code[2] to optimize it and reduce CPU use, but it's not pretty: expect an average of about 7 kilobits per second. But if you just need brief access, it works just about anywhere.


2. Tunnel over ICMP

Surprisingly difficult to exploit in the real world, this protocol offers an alternative tunnel through firewalls that don't block it. Networks use ICMP for all kinds of things, most notably passing errors and routing information, as well as the ubiquitous 'ping' messages. If you can't ping, there may still be alternative ways to tunnel data through the protocol, but in general ping is the simplest.

The attack works like this: send an ICMP echo message (which is not always restricted the way TCP and UDP traffic is) with an IP or other packet encoded in its payload, similar to tunneling over DNS. The benefit of this method is that (when it works) it can provide a highly reliable connection without excessive lag (around 24 kilobits per second of bandwidth on average). The tool icmptx[3] works well.
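As a sketch of the encoding half, here is how an ICMP echo request carrying smuggled data in its payload could be built in Python. Actually sending it requires a raw socket and root privileges, which is omitted; this is the general idea, not icmptx's code.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard ones'-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data for 16-bit summing
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def icmp_echo(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP echo request whose payload is the smuggled data."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # type 8 = echo request
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A cooperating server on the internet unwraps the payload and replies with echo responses carrying return data the same way.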

A bigger problem looms with IPv6, though, as the newer ICMPv6 imposes rate limits on ICMP packets of around 6 per second. This seriously hampers the performance of any application trying to tunnel data through the protocol, but luckily nobody uses IPv6 yet. In general, most CPs seem to block ICMP packets anyway.


3. Firewall pinholes

Though seldom probed by anyone but the most desperate CP hackers, most firewalls do allow one or more tcp or udp ports outbound to remote hosts (usually on the internet). These pinholes will let you tunnel an arbitrary protocol, as long as it works over the transport protocol of the pinhole.

My favorite of these is tunneling OpenVPN over udp port 53, the DNS port. Some CPs allow port 53 outbound to any host on the internet without checking whether it's actually passing DNS requests, which is a big mistake. Unfortunately, other CPs intercept and rewrite any traffic going over port 53 to provide their own DNS responses, which breaks this method.

Sometimes strange high-numbered ports are open to remote hosts. The simple way to find them is to run tcpdump or a custom application on the remote host and scan every port from the CP's network to identify which ones get through. With an automated script this is easily accomplished, and once a hole is found the tunnel will provide full bandwidth to the attacker. For attacking CPs that only grant access to internal networks, you may be able to craft specific packets which produce a different response when successfully passed through, thus enumerating open ports.
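A sketch of the client side of that scan in Python. The remote host would just be listening (or running tcpdump) on the candidate ports; the function and host names are illustrative.

```python
import socket

def find_pinholes(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection to `host`.
    Run from inside the CP against a server you control to learn which
    outbound ports the firewall actually passes."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))
                open_ports.append(port)  # connection went through the firewall
            except OSError:
                pass  # blocked, filtered or closed
    return open_ports
```

In practice you would scan in large batches and compare against what the remote host saw arrive, since a CP may silently redirect some ports rather than drop them.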


4. Transparent proxies

Oftentimes the best holes come from an incorrectly set-up network proxy. Often the ACLs on HTTP proxies are set up just well enough to block basic requests and don't handle the whole spectrum of possible HTTP requests. Sometimes using the proxy is as simple as configuring the CP's gateway IP in your browser's proxy settings. Other times a third-party server will go the extra mile of getting you through the ACLs.

The simplest attack in this case is simply looking for an open proxy on some host on the network; Nmap[4] is bundled with scripts which will quickly detect open proxies. Sometimes one needs to abuse the HTTP specification to find a hole in a particular proxy implementation. Some ACLs can be bypassed merely by changing CRLF to LF in your request, or by using a different HTTP method. Some authentication/authorization software even has rules that let you bypass authorization by adding a "?.jpg", "?.css", "?.xml" or other extension to your request. Sometimes header injection can be used to provide a quick jump through a backend proxy.
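For illustration, here are a few such request variants built by hand in Python. Which tricks, if any, work depends entirely on the proxy implementation, so treat these as examples of the idea rather than a working exploit; all names are placeholders.

```python
def raw_request(host, path, line_end="\r\n"):
    """Build a raw HTTP/1.1 GET by hand so we control every byte."""
    lines = ["GET %s HTTP/1.1" % path, "Host: %s" % host,
             "Connection: close", ""]
    return (line_end.join(lines) + line_end).encode()

def request_variants(host, path):
    """Variants that sloppy proxy ACLs have been known to treat differently."""
    return {
        "normal": raw_request(host, path),
        # some ACL engines only match on CRLF-terminated header lines
        "lf_only": raw_request(host, path, line_end="\n"),
        # some auth layers whitelist anything that looks like a static asset
        "fake_asset": raw_request(host, path + "?.jpg"),
    }
```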

However, this may only give you HTTP or HTTPS access through the CP. To pass arbitrary data through the proxy, it must allow you to use the CONNECT method to reach a remote host. This is usually allowed because the SSL port (tcp port 443) carries encrypted packets, which can only pass through a proxy via the CONNECT method. If CONNECT is supported, we can take advantage of a third-party server on the internet to tunnel arbitrary packets over HTTP.

An HTTP proxy is set up on the internet which passes connections on to an internal SSH server on port 22. A client then uses proxytunnel[5] to connect to the CP's gateway and from there issues a CONNECT to the third-party proxy, usually on port 443 (though port 80 is a good alternative to have set up). The final request is a CONNECT to that machine's SSH port. If all goes well, you can point your SSH client at this new connection and forward arbitrary data over it. All the CP sees is HTTP or HTTPS traffic being forwarded from one proxy to another. The end result is a high-bandwidth tunnel through the proxy.
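The chain of CONNECT requests can be sketched like this. The hostnames are placeholders, and this mirrors what proxytunnel does conceptually rather than reproducing its actual code.

```python
def connect_request(host, port):
    """The raw CONNECT request for one hop of the chain."""
    target = "%s:%d" % (host, port)
    return ("CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (target, target)).encode()

# hop 1, sent to the CP's gateway proxy: reach our own server's "HTTPS" port
hop1 = connect_request("proxy.example.net", 443)

# hop 2, sent through the tunnel hop 1 opened: reach that server's SSH daemon
hop2 = connect_request("127.0.0.1", 22)
```

Once both hops return "200 Connection established", the socket is a raw pipe to the SSH daemon, and everything after that is opaque to the CP.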

A tool[6] included with this paper will allow you to quickly probe ports found on a CP gateway for common problems in HTTP implementations and look for a way out.


5. MAC spoofing

The last method is the only one present by design in all wireless and some wired CPs. Once a host is authorized by the CP, its MAC and IP address are allowed unrestricted access. All one needs to do is sniff traffic on the network, find a host that is authorized, and spoof its IP and MAC address. Spoofing a MAC depends on your network card and driver, but most modern network devices support it. The downside, of course, is that you have to observe someone who is already authenticated, but in places such as a crowded airport lobby this may be less difficult than it seems.


6. Miscellaneous

Of course, there are many more methods of circumventing CP access, with varying degrees of success. They include the use of fuzzers, creative source routing and probing, the abuse of Cisco protocols for proxy clustering, and the abuse of "convenience" features in some authentication servers.


III. DEFENSE TACTICS


I can't tell you how to perform these attacks without at least trying to tell you how to fix your own broken networks, so here's the white-hat portion of the paper.

First of all, block all unauthenticated traffic (at any layer) destined for the internet. There's no reason for any client to be able to pass packets without being authed, so put that rule at the top of your firewall. That will stop firewall pinhole attacks and IP-over-ICMP.
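As a heavily simplified example, on a Linux gateway this could look like the following, assuming an ipset named "authed" that your portal populates on successful login. The set name and interface are illustrative; adapt to your own setup.

```
# allow forwarding only for clients the portal has marked as authenticated
iptables -A FORWARD -i wlan0 -m set --match-set authed src -j ACCEPT

# everything else from the client network is dropped, on every port and protocol
iptables -A FORWARD -i wlan0 -j DROP
```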

To stop IP over DNS, simply have all DNS requests from unauthed sources return the same record every time: the IP of your CP gateway. Not only will this work fine with your transparent HTTP proxy (your proxy should be redirecting to your gateway's website anyway for authentication purposes), you can also run service-specific proxies that handle requests and tell the client to auth first via HTTP. SSL won't work, but screw them for trying to be secure.
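If you happen to run dnsmasq for the pre-auth network, the catch-all answer described above is a one-line configuration (the gateway IP here is illustrative):

```
# answer every query from unauthenticated clients with the portal's own address
address=/#/10.1.1.1
```

This also neuters the NS-delegation trick, since requests never get forwarded to the attacker's nameserver in the first place.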

To stop transparent HTTP proxy abuse, pick a well-established proxy server that's stable and up to date on all security patches and bug fixes. Make sure you craft your ACLs to explicitly stop ALL requests coming from an unauthed host; it should only return the redirect to your authentication website. If, for example, the server differentiates between requests terminated by LF versus CRLF, you should probably err on the side of writing more rules rather than fewer, and cover all the possibilities.

For other methods of abuse, make sure your CP solution has no unneeded services or ports available on the network. These are easy targets for attack (who here has checked whether snmp is enabled on their gateway? ssh?), and the fewer attack vectors there are, the less likely an attacker will get through. Audit all your systems to ensure they don't accidentally allow more access than necessary.


IV. SUMMARY


Almost all CPs today can be bypassed one way or another. Clearly there's a certain risk-versus-reward point at which a company just doesn't care whether the top 1% of users can gain access without authorization. That's good for us hackers, but it can be bad for the CP provider.

Did you know that, since their inception, the Starbucks and McDonald's for-pay wifi networks had open squid proxies on every default gateway? All you needed to do was run an nmap scan to find one, set it in your browser's proxy settings and surf for free. (Oh, and the payment network McDonald's uses to process credit cards runs over the same network.) Did you know there's a major mobile voice and data provider with not one but THREE holes in its service, allowing anyone to use it for free anywhere in the USA with 3G coverage?

Even if Bradley Manning had not been able to get a CD-ROM of US embassy cables out of a secured facility, he might have been able to find a way through network firewalls using these same techniques and tunnel the data out. It's not that far-fetched: as corporations increasingly lock the internet away from their employees, industrious hackers find new, more challenging methods to circumvent the restrictions and get the access they want. With this in mind, those implementing the controls should be wary of the many methods of attack and the potential for abuse.

This paper has not covered attacking the web applications which authenticate users trying to gain access to the network. I am not a web application pen tester, so I'll let someone else review those holes.


V. REFERENCES

1. iodine - http://code.kryo.se/iodine/

2. dnstunnel - http://psydev.syw4e.info/new/dnstunnel/

3. icmptx - http://thomer.com/icmptx/

4. nmap - http://nmap.org/

5. proxytunnel - http://proxytunnel.sourceforge.net/

6. quickprobeportal.pl - http://psydev.syw4e.info/new/misc/quickprobeportal.pl

Thursday, January 12, 2012

Hackerne.ws DNS temporarily broken

If you're going to update DNS, use a tool that sanity-checks your configuration and runs your zones in a sandbox before deploying them. Otherwise this happens and your site goes down:

willisp@darkstar ~/ $ dig hackerne.ws

; <<>> DiG 9.4-ESV-R4 <<>> hackerne.ws
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43879
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;hackerne.ws. IN A

;; ANSWER SECTION:
hackerne.ws. 16 IN CNAME 174.132.225.106.

;; Query time: 80 msec
;; SERVER: 150.123.71.14#53(150.123.71.14)
;; WHEN: Thu Jan 12 09:57:05 2012
;; MSG SIZE rcvd: 58

They soon fixed the problem, so I'm not trying to give them too hard a time, but it's a good lesson in why even modest sites should do quality control on all production-touching changes. Unless you're really familiar with DNS, the mistake above (a CNAME record pointing at an IP address instead of a hostname) might be easy to overlook while troubleshooting.
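This particular class of mistake is easy to script a check for. Here's a sketch in Python; the record-tuple format is a stand-in for whatever your zone tooling actually gives you.

```python
import ipaddress

def bad_cname_targets(zone_records):
    """Flag CNAME records whose target is an IP address, the exact mistake
    in the dig output above. `zone_records` is a list of
    (name, rtype, target) tuples, a simplified stand-in for a parsed zone."""
    bad = []
    for name, rtype, target in zone_records:
        if rtype != "CNAME":
            continue
        try:
            ipaddress.ip_address(target.rstrip("."))
        except ValueError:
            continue  # target is a hostname, as it should be
        bad.append(name)
    return bad
```

A pre-deploy hook that runs checks like this (plus named-checkzone or your server's equivalent) would have caught the record before it went live.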

Wednesday, December 28, 2011

thought for making development-to-production code pushes less prone to bit rot

you know that familiar problem where your development environment isn't in sync with the production servers? you upgrade the software so you can test new features, you push to production, and HOLY SHIT the site breaks because you weren't developing on the same platform. that is, if you aren't already doing the fucking upgrade polka just trying to get the newer software shoehorned into the old-ass legacy PROD machine. here's a potential fix.

you keep your codebase locked to your system configuration. if a dev server's software changes (or really, if anything on the system changes), you take a snapshot and you force the code to be tagged or whatever to the new snapshot. in this way the code is always in line with any given system configuration so you know what code works with what system configuration.

also, you always keep a system that matches your production machine. your HEAD development tree may be a wild jungle but only code you commit to the branch that is the same as the production machine can be tested, and only the development machine with the exact same system configuration as the production machine can test the code. so you will have your HEAD dev system and your PROD dev system, and the PROD dev system will mirror the PROD machine, not the other way around. you can call this "QC"/"QA" if you want but dev systems usually have local edits and don't do normal deployments and other bullshit bit rot creep.

so on the HEAD machine developers can test whatever the fuck they want, but until it works on the PROD dev machine it can't be deployed to PROD. this will also force you to actually do unit testing and other fun shit to prove the PROD dev code works as expected before you can deploy it. yay!

Wednesday, December 21, 2011

FUCK ANDROID

AAARRRGHHH.

I need to make a phone call to a local business right now. I can't, because every time I dial the number and press 'Dial' the "3G Data" option window pops up. It simply will not dial the number. I have put up with your shit, Android, and I'm done with it.

This isn't the first problem you've had. Your apps seem to crash daily, or one app sucks up "35% CPU" and makes every other app lag like my grandmother in molasses. Stock apps like the Browser and Maps can bring the whole thing to its knees. And in these weird states, on the few occasions I actually receive a phone call, I can't swipe to answer it because the UI is too lagged. Let's not even talk about the native text messaging, which is not only the laggiest SMS client I've ever used but the first phone I've had that regularly fails to send messages at all.

Google's apps in particular seem to suck. Google Voice takes about 30 seconds just to refresh the text messages once I open the app. Maps has weird bugs so that if I lock the screen while viewing the map, the map freezes and I have to kill it and restart it. Randomly the whole device will just appear sluggish even if I haven't been using it. And some apps become impossible to uninstall, becoming nag-ware for registration or payment.

A PHONE SHOULD BE A PHONE. All I wanted was GPS, Maps and browsing built into my phone, and maybe a nice camera (though years later, apparently Sony is the only company capable of putting a decent camera in a phone). But that was too much for you, Android. You had to be fancy. And now I'm throwing you away.

Nobody should be making Web Apps

let's face it. you're doing it wrong. it's not your fault; the rest of the world told you it was okay to try to emulate every aspect of a normal native application, but in a web browser. don't fret, because i'm about to explain why everything you do is wrong and how you can fix it. but first let me ask you a question.

what do you want to accomplish?

A. producing a markup-driven portable readable user agent-independent interpreted document to present information to a user?

B. letting a user interact with a custom application with which you provide services and solutions for some problem they have for which they don't have a tool to solve it?




if it's A you chose the correct platform. a web browser is designed to retrieve, present and traverse information through a vast network of resources. it has the flexibility, speed and low cost of resources to let you pull tons of content quickly and easily. after all, we all have at least a gigabyte of RAM. you should be able to browse hundreds of pages and never max out that amount of RAM - right?

if it's B this is the wrong choice, and for a simple reason: a browser is not an application platform. it was never designed to provide all the tools you need to support the myriad needs of applications. imagine all the components of an operating system and what it provides to let simple applications do simple things. now consider a web browser and what it provides. starting to get the picture? here's a simple comparison: an operating system is a fortune 500 company and a web browser is a guy with a lemonade stand. no matter how many 'features' he can sell you, the super low-calorie healthy organic sweetener, the water sourced from natural local clean purified streams, whatever: it's still lemonade.




technical reasons why web apps are dumb:
  1. the browser is becoming a Frankenstein's monster: slow, kludgy, gigantic, unstable, a security risk.
  2. verifying if my credit card number was typed in correctly is fine, but javascript should never run actual applications or libraries.
  3. applications that can interact with the local machine natively can do a wide array of things limited only by your own security policies and the extent of your hardware and installed libraries (which can be bundled with apps). web apps have to have the right browser installed, the right version, and compete with whatever other crap is slowly churning away, restricted by hacked-on browser security policies designed to keep your browser from hurting you.
  4. web applications are not only sensitive to the user's browser & network connection, they require your server backend to provide most of the computation resources. now not only can a user not rely on the application as much, you have to put up the cost of their cpu & network time, which is much more difficult than it is expensive when you really start getting users.
  5. the user doesn't really give a shit how their magical box provides them what they want. they just want it immediately and forever and free. so you're not really tied to using the web as long as you can provide them the same experience or better.
  6. seriously - Web Sockets?! are you people fucking insane? why not a Web Virtual Memory Manager or Web Filesystems? or how about WebDirectX? ..... oh. nevermind. *headdesk* i can't wait for Real-Time Web Pages.




i know what you're saying: what the hell else am i supposed to do? make native apps? i would compare the smartphone mobile app market to the desktop app market but the truth is it's ridiculously easier to bring in customers for mobile apps. and yes it's probably ten times easier building web apps with all the fancy friendly frameworks that can be tied together to push out new complete tools in hours instead of days or weeks. but that's also no excuse because it's all just code; we could build easy frameworks for native or mobile apps too. what is the alternative? is there one?

i don't think there is. Yet. you see, where the web browser fails us we have an opportunity to create a new kind of application. something that's dynamic and ubiquitous yet conforms to standards. something easy to deploy, cross-platform and portable. something using tools and libraries implemented in fast native code. something with an intuitive interface that exposes a universal "store front" to download or buy applications to suit our needs. something local AND scalable. sounds like a pipe dream.

maybe we can't have everything. but i see pieces of this idea in different places. when i look at Steam i see most of what's necessary: a store for applications, a content delivery system, a (mostly) secure user authentication mechanism. if you could take the simplicity of Python (but, you know, without the annoying parts), make it reeeeallly cross-platform by design, then produce simple frameworks to speed up the building of new complete tools, you'd have most of the rest.




the last thing you'd need is a way to make it sexy enough for everyone to pick up and start using. there's the difficult part. it seems to me that the competition of a few major players and the evolution of standards for new web technology is what led the arms race to bring "web apps" as the most ubiquitous computing platform for user interaction (next to mobile apps). that and the trendy, almost generation-specific explosion of investment of time in javascript-based frameworks led everyone to just build web apps by default. the new solution has to be needed for something for anyone to pick it up. you could start it as a browser pet project, but it seems uncertain whether other browsers would pick up the technology or wait it out.

this is where my sleep deprivation and the hour's worth of work i need to put in makes me ramble more than usual. my main point here is: make it easy, make it convenient, and make it somehow better than what we've had before. the end goal is of course, to stop creating bloated-ass crazy insecure web browsers that threaten our financial and personal lives and instead make stable, powerful applications which don't need a specific kind of browser or class of machine to run (necessarily).

bottom line: a browser is not an operating system, and the world wide web is not the internet. the web is merely one part of it.




(disclaimer: i don't write web apps)