if i were really smart i'd set up SELinux and use sandbox -X to create a secure sandbox for my Firefox and other networked applications. i still want to get around to this one day, but it seems too complicated for me to learn how to write all the rules necessary in 30 minutes.
instead i use a combination of tunneling and browser-side filtering to lock down my browsing session. first of all, all my traffic goes through an ssh SOCKS tunnel to a VPS i pay for (these can cost as little as $4 a month with a bandwidth cap of 100GB or more, plenty for general browsing). this immediately solves the "starbucks sniffer" problem, so the only traffic left to worry about is what goes between my VPS and the websites i'm connecting to. this works everywhere i have internet access, using my server-side HTTP-to-SSH proxy (example here and here) and proxytunnel (i still need to check out corkscrew though).
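for reference, the client side of the tunnel boils down to one command (a minimal sketch; the hostname, user and key path are placeholders for whatever your VPS actually uses):

# open a SOCKS5 proxy on localhost:1080, forwarded over ssh to the VPS
# -N: no remote command, -C: compression, -D: dynamic (SOCKS) port forwarding
ssh -N -C -D 1080 -i ~/.ssh/vps_key user@vps.example.com

then point Firefox at localhost:1080 as a SOCKS v5 proxy, and flip network.proxy.socks_remote_dns to true in about:config so DNS lookups go through the tunnel too.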
for those connections, and the content delivered over them, i rely on an assortment of Firefox add-ons. my current list is: Adblock Plus, Cert Viewer Plus, Certificate Patrol, Expiry Canary, facebooksecurelogin, Force-TLS, HTTPS-Everywhere, NoScript, Safe, SSL Blacklist, and WOT.
the end result? i have a lot more insight into what is going on behind the scenes as i browse the web. every time an old SSL certificate is replaced with a new one, i get a notification with a diff of the changes. when a site's certificate is about to expire i am notified, so i have advance warning that a site could be exploited or unavailable in the future. all connections to frequently-visited sites such as Wikipedia, Facebook, Google, PayPal and Twitter are forced to use SSL. if i load an HTTPS page, the border around my browser window turns green, confirming the whole page is indeed using SSL; if any element on the page is not served over HTTPS, the border turns red. if i submit a form that doesn't go to an HTTPS url, i am warned before i can press submit. if any certificate uses MD5, i am warned. and when browsing google and other websites i am warned if a site has a low or bad rating, has been reported as a malware site, etc (and it's usually right on the money). of course with NoScript any site i don't explicitly trust can't load potentially-malicious JavaScript, XSS attacks are prevented, and i can even force all cookies and javascript to use SSL to prevent interception or injection (à la Firesheep).
with all these protections i have much more visibility into whether a site i'm on could potentially have malicious content, and my interactions with these and other sites are inherently more secure. of course most of these plugins are only effective on the most popular sites by default, since complex rules often have to be written to allow specific requests to prevent complicated attacks. but at least we're starting to get more secure by default instead of less.
Monday, November 1, 2010
Friday, October 29, 2010
hacking corporate store fronts
local business in america is kind of a quagmire. it seems that except for a few small areas where tons of self-interested stuck-up liberals take the initiative to completely force out corporate interests from a given city, corporations run things. most storefronts you find in america are cost-cutting franchises and subsidiaries of conglomerates. it's no wonder americans easily swallow any pre-packaged product sold to them: be it music, food, television, movies, games... it's all made to order with few variations and dumbed-down for everyone's generic tastes. (heh, it's kind of funny that those are the only things americans are interested in, too)
local businesses get pushed out by these bigger corporations, mostly due to extremely competitive prices. but local business could bring a lot of variety to consumers and in effect influence the entire culture wholly through local means, if it was done on a large enough scale. the question is, in this capitalistic dog-eat-dog country, how do you introduce local business when the whole economy is based on cutting them off at the knees?
i think big companies could start by taking their already extremely effective cost-cutting measures and applying them to more specific tastes. if you work more closely with all the producers of the "content" that goes into your products, you can still keep costs low while generating a wider variety of products. integrate more, produce with more efficiency.
it would be pretty simple in principle: for any given "metro area", or whatever you determine to be an area with a specific taste that you could market a certain regional product to, create a brand. then create a line of products that are "mostly" only sold by that brand. in this way not only do you create the appearance of originality and variety, you can hopefully win over the local populace and generate a kind of grassroots following for your brand.
the goal here is to *not* allow people to associate your stores regionally/locally the way people do nationally. they should not be able to say "that's the mcdonalds of east texas." part of that is keeping your brand relatively small, but also making sure your products aren't overly cookie-cutter in nature. nothing turns people away from big business faster than the lack of a mom-and-pop appearance. you need to hire good people to help sell the brand, but your products also need to have a certain element of being created or finished in the store itself.
have you ever seen a national franchise which could, for example, cook an omelette made-to-order in two minutes for a customer? i don't think i have. there must be an expense associated with shipping fresh eggs, keeping them cool, allowing for a kitchen area to prepare the ingredients, etc. but sandwich/sub franchises do almost this very thing. quiznos franchises receive pre-cooked bread and ingredients and assemble them in a matter of minutes for their customers, producing what i consider to be a fairly high quality sandwich for the price and time. so why can't we ship pre-mixed eggs and the same ingredients, throw them in a bowl, put it in a microwave or some other omelette-cooking machine, and give people something fresh(ish) and made-to-order/home-made?
all you'd need to do at that point is rename the store for a given region, customize the ambiance, and switch the recipe around a bit depending on the area. your store fronts gain the reputation of being a "local", original, consistent source for (hopefully) good products, and your customers gain the sense that they're not just buying the same old crap from a national chain, maybe even believing they are helping the local economy. (maybe you could even go so far as to put more reward in the hands of the local store owners/managers so they actually produce more good for the given region? but now i'm really dreaming)
Monday, October 25, 2010
Why I think Devops is stupid
http://www.jedi.be/blog/2010/02/12/what-is-this-devops-thing-anyway/
First of all, this isn't a "movement." People have been trying for years to find quality sysadmins who are also competent programmers. I still believe that except for a few rare cases, these people do not exist. And they shouldn't: if someone claims to be both at once, something is clearly wrong.
If I told you I spend all of my time both becoming the best sysadmin I can be and becoming the best programmer I can be, would you believe me? If so, I have a bridge to sell you. The fact is that when I'm a sysadmin I really don't program much at all. I spend my day at work fighting fires and performing odd jobs, and when I get home the last thing I want to do is get back on the computer. And at work, if I spent most of my time researching new development trends and writing new tools in experimental languages, how much real sysadmin work would I be doing? No, the truth is I wouldn't have enough time in the day to be both a full-time sysadmin and a full-time programmer. I can only do one job at a time.
"the Devops movement is characterized by people with a multidisciplinary skill set - people who are comfortable with infrastructure and configuration, but also happy to roll up their sleeves, write tests, debug, and ship features."
Sorry. I have a job. I don't want to have to do the developers' jobs too. I'm upgrading the Oracle cluster to RAC and being woken up at 3 AM because some bug somewhere deep in the site caused pages to load all funky, and I'm trying to figure out who committed the flaw and get them to revert it. Even if I wanted to, I'm a sysadmin; I'm not familiar with the developers' codebase, and sometimes not even the language they're writing it in. How the hell can you expect me to realistically debug it in real time? And writing tests? Really, you want me to write the developers' unit tests?
Don't get me wrong. I am fully in support of the general idea of better communication between groups and sysadmins working with developers, DBAs, QA, neteng, etc to build a better product. I think it'd be insane for any group to go about making any major changes without consulting every other group and working out any potentially negative ramifications. But this doesn't mean each group has to know how to do each other group's job. Communication is the key word here, not cross-pollination.
There are lots of technical issues that come up in the building of any product. To make it work as well as possible, there are lots of different problems which have to be accounted for. The problems cited in the above post - 'fear of change,' 'risky deployments,' 'it works on my machine,' 'siloization' - all require planning and cooperation to resolve. But this is basic stuff, to me. You don't need to be a DevOp to realize your devs need the same baseline system for testing their apps as your production system (sometimes more than one). The apps have to be developed in a way that allows for a smooth upgrade in the future. And you need a competent deployment and reversion system with change approval/code review and reporting.
These issues are not solved by simply having a 'DevOp' whose responsibility is not only their own systems but apparently the total management and architecting of the whole process of developing a product and delivering it flawlessly. To properly deal with these issues you need many things. You need really strong management to keep teams working together and to help them communicate. You need some kind of manager or architect position who can keep track of how everything works and juggle the issues before they become serious problems. You need people who are really good at their jobs, and you need to get them to ask for help when they need it.
Nobody's job is simple. But creating some new position to supposedly solve all these issues by being super-human techno-gods? Even if you could get these godly Devops people in every corporation, there's no promise that they can even get past the politics inherent to each group to make everything work as harmoniously as the post describes. There is no magic bullet. No movement will make everything alright. The world is harsh and complex, and a DevOp isn't going to save it.
Tuesday, October 19, 2010
utf8 terminals
UTF-8 lovin' for my terminals:
my damn fonts keep having a problem with chinese and other languages if i don't use the default font and size. luckily the default is at least barely usable, though pretty large. more application-specific details here.
(in bash)
export LANG=en_US.UTF-8
export LC_CTYPE=en_US.UTF-8
(in irssi)
/set recode_autodetect_utf8 ON
/set term_type utf-8
/set term_charset utf-8
(for your terminal)
uxterm -sb -bg black -fg green -g 100x25
(for screen)
screen -U
(for tmux)
tmux -u
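(quick sanity check: make sure the UTF-8 locale actually exists on the box and got picked up; if it isn't listed, none of the above helps)
# list available locales and confirm the UTF-8 variant is really there
locale -a | grep -i 'en_US.utf'
# show what the current shell ended up with
locale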
Friday, October 15, 2010
do the legwork
in the various positions in the IT industry we all have specific jobs and tasks to do. we don't always do them as well as we could. usually it boils down to someone doing the bare minimum for a variety of reasons, and something ends up breaking.
there are different reasons why things might not be done as well as possible. maybe the deadline's fast approaching and you just need something to work. maybe you've not got enough budget. maybe your bosses are just jerks and even though you tell them what you need to get it done right, they ignore you and force you to produce sub-standard work.
the resulting fail will sit in the background for some time until a random occurrence triggers it. by chance something goes wrong, everything breaks, and you're left holding the bag. sometimes that means big hassles and wasted money. sometimes it means you get fired. so when you do have the chance, take the time and do it right.
as far as security is concerned this principle affects everything. there are lots of things you can do to secure any given system. the more you do, the less likely it is that the one attacker you were working to stop will succeed in his or her objective. this applies to everyone in the IT field: programmers, admins, NOC, QA, analysts, managers, etc. if you do it all right the first time you won't be left holding the bag.
so for example: if you work for a large mobile internet service provider and it's your job to set up the service paywall, don't skimp on anything. make sure it's as secure and reliable as possible and don't trust anything to chance. the one person who figures out a way for everyone in the country to get free internet could put considerable strain (financial and otherwise) on your employer, and they won't be happy with you.
or if you run the large systems which are targeted by drive-by botnets as command and control machines or injection points, do your jobs, people. apply the latest security-tightening patches. use mandatory access control. use chroots. use separate users for each service. remove the need to log in as root wherever possible. add intrusion detection. keep up with patches! do you know how much of a hassle it is to clean up and replace systems that have been owned en masse just because you allowed a simple shitty buffer overflow to execute?
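none of those items are exotic, either; a couple of them are literally one-liners (a sketch only; the service name and package commands stand in for whatever your platform uses):
# give each service its own unprivileged system account instead of sharing one
useradd -r -s /sbin/nologin -d /var/lib/someservice someservice
# stop logging in as root over the network: in /etc/ssh/sshd_config, set
#   PermitRootLogin no
# and actually keep up with patches
yum update -y        # or: apt-get update && apt-get upgrade -y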
and programmers, come on. you're never held responsible for these problems. it's always the other groups which are used as the example and who look foolish because of your crappy, insecure code. the code runs on their systems, so the perception is it's their fault they got owned. but they didn't write that shitty file-uploading php script, you did. you let the bot herders in the front door and made it that much easier for them to expand their attack into the network. congratulations, homie. yes, the admins should have tightened security around php to account for unexpected holes, but you shouldn't make it easier for the attackers either.
and firewall dudes: how hard is it to friggin download a malware watch list and block bad domains/IPs? you're responsible for both the servers AND desktops which are affected by worms/trojans/etc. you know how to tighten these boxes down and tighten up the network access, so do it already!
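as a sketch of how little work that is (bad-ips.txt stands in for whatever blocklist feed you actually trust):
# build a set of known-bad IPs and drop anything headed their way
ipset create blacklist hash:ip
while read ip; do ipset add blacklist "$ip"; done < bad-ips.txt
iptables -I FORWARD -m set --match-set blacklist dst -j DROP
iptables -I OUTPUT -m set --match-set blacklist dst -j DROP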
you're saving yourself work in the end. how many of us have been caught in a tight deadline when suddenly all work has to stop to deal with the intrusion and see how far it got? do you have the spare boxes and cycles to deal with that? how is it affecting your bottom line? your sleep schedule? in the end it's the executives and managers who need to be more proactive in enforcing these trends in the rest of the work force, because if they don't force people to then nobody's going to take the extra time. create a culture of polished work and everyone should benefit.
Friday, October 8, 2010
how NOT to design software for ease of troubleshooting
psypete@pinhead:~/svn/bin/src/etherdump$ svn up
At revision 211.
Killed by signal 15.
strace gives no indication of wtf is going on and there's no debugging mode to give me more information. Of course this is subversion, so instead of a simple man page to give me some help I have to read through the 'svn book' or run commands 20 times just to learn there are no debugging flags (afaict).
What is the actual problem and fix, after 15 minutes of googling? Some version bump re-introduced a bug (I didn't know I had even upgraded subversion, so perhaps something else is making the bug pop up) that causes svn to kill itself if ssh isn't playing nice. Effectively you have to pass "-q" to ssh anywhere that svn calls it, which in my case meant the weird tunnel entry in my subversion config.
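For the record, the workaround lives in the [tunnels] section of ~/.subversion/config, something along these lines:
[tunnels]
# make svn+ssh:// invoke a quiet ssh so its exit noise doesn't take svn down with it
ssh = ssh -q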
The tool could have spit out something like "hey, ssh is giving me shit, so i'm bailing; check out ssh" and it would have greatly decreased the time it took me to resolve the issue. Instead it just committed hara-kiri and told me a cryptic signal number. This is not how to design a tool made for user interaction.
Monday, October 4, 2010
a brief introduction to bad internet paywall security
for some reason everybody seems to leave some hole in an internet paywall you can go through to get free internet access. there are some obvious methods, and some less obvious methods. at the end of the day, though, you should be aware of all of these when you deploy one.
ip over dns
this one is a given. if you have a caching dns server/forwarder, ip over dns like iodine or NSTX will usually get you somewhat unstable but workable internet access. the fix is of course to just tell dnsmasq to point all lookups by an unauthorized client at your http server and provide an http redirect to the paywall site. apparently this is ridiculously hard for admins to comprehend.
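the fix really is about one line of dnsmasq config (a sketch; 10.1.1.1 stands in for the paywall box, and the wildcard answer goes in whatever dnsmasq instance unauthenticated clients get pointed at):
# answer every DNS query with the paywall's own address until the client authenticates
address=/#/10.1.1.1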
tunneling out through firewall pinholes
if admins set their firewalls up right, there should be no packets originating from an unauthorized wifi client which can hit a host on the internet. apparently it's much easier to just allow any wifi client to connect to udp port 53 on a remote host without even using a real dns service to pass along the query. openvpn listening on port 53 becomes highly useful here. a creative hacker could use something like a google voice-powered SMS-controlled app to report back any SYN packets in a 10-minute window and just try all 65k ports to find an open pinhole in a firewall.
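you don't even need the SMS cleverness if you already control a box on the internet: sniff for your own SYNs on the far side while you sweep every port from behind the paywall (the hostname here is a placeholder), then park openvpn on whatever port leaks (proto udp / port 53 in the server config, say).
# on the remote box: log any SYN that actually escapes the paywall
tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0'
# from behind the paywall: try every tcp port against that box
nmap -Pn -p- -T4 vps.example.com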
ip over icmp
this one isn't nearly as likely to work as the last two, but when it does work it's much more stable a connection than ip over dns. examples are hans and ICMPTX. however it's usually rate limited to around 23kB/s in my experience (and it's probably much much much slower on IPv6, according to the spec only allowing something like 4 ICMP messages per second?), so if you can use a tunnel straight to a remote host without going through another protocol and its overhead, all the better.
overly permissive transparent squid proxy
so far i think i've only found one such proxy that successfully denies http requests from unauthorized users. people just don't seem to understand that even if your transparent proxy doesn't advertise an IP address for me to configure, i can still use it. a very simple example is just doing
`echo -en "GET http://www.google.com/ HTTP/1.1\r\nHost: www.google.com\r\n\r\n" | nc www.google.com 80`. if this succeeds, their proxy is letting anyone go right through to the internets without authing. to use this in practice, download ProxyTunnel and use Dag's SSH-over-HTTP method to open an ssh tunnel with a SOCKS5 proxy, or hell, a ppp-over-ssh tunnel to get Hulu to work. you should try both port 80 and 443 with this method, as sometimes they'll only allow one of them outbound through the proxy. also take note that even though the default transparent proxy might be too restrictive, you should scan the default route and the rest of the network with nmap for more open proxy ports like 3128, 8080, etc (hint: AT&T's open proxy port is non-standard). for the most part some variation on this ssh config line will get you what you want:
ProxyCommand proxytunnel -p www.google.com:80 -r remotehost:public_http_port -d remotehost:internal_ssh_port -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n"
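in context, the full ~/.ssh/config stanza looks something like this (the host alias is made up; remotehost and the ports are whatever your setup uses), after which `ssh -N -D 1080 paywall-escape` gets you a SOCKS proxy out through their own proxy:
Host paywall-escape
    ProxyCommand proxytunnel -p www.google.com:80 -r remotehost:public_http_port -d remotehost:internal_ssh_port -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n"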
MAC address/IP address cloning
this is probably the easiest/most reliable method to get through a paywall. if someone else is already authed, just sniff the network, find their MAC and IP address, set it as your own, and start browsing. to be honest i don't ever use this method but it should work in theory. if they enforce WPA encryption it should make this method difficult to impossible, though i'm really not up to speed on all WPA attacks.
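for completeness, the cloning itself is just a few iproute2 commands (a sketch; the MAC, IP and gateway are whatever you sniffed off the wire):
# take over the MAC and IP of a client that's already authenticated
ip link set dev wlan0 down
ip link set dev wlan0 address 00:11:22:33:44:55
ip link set dev wlan0 up
ip addr flush dev wlan0
ip addr add 10.0.5.23/24 dev wlan0
ip route add default via 10.0.5.1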
Wednesday, September 15, 2010
virgin mobile broadband2go in linux
Recently I wanted to get internet in my home for cheap. My friend recommended Virgin Mobile's mobile broadband since it is now only $40 a month, pre-paid (no contract), with unlimited 3g data. This is by far the best deal you can get for mobile internet access in this country. Every other service is both more expensive and has a tiny data cap, and usually requires a contract.
It's no wonder every wal-mart I visited in a 20 mile radius was sold out of the MiFi, a battery-powered wifi hotspot and 3g modem. At $150 it's not cheap at all, but the supposed ease of set-up and the ability to share internet with up to 4 wifi devices make the convenience worth the price. Since I couldn't find one I opted for the Novatel Wireless MC760 usb 3g modem. At $80 it's much more affordable, but much more annoying to get working.
Only windows and mac are supported by the mc760. Normally this just means finding some half-working Linux driver and getting no support, which is pretty standard in the Linux world. In this case it's much much worse: you have to use windows or mac drivers and software to perform some magical rituals in the firmware before it'll even connect to the service. So there's really no way to use it without at least setting it up in windows or mac.
I of course didn't want to go along with this, mostly because it would be annoying to pirate a copy of windows just to get some crappy modem working. I tried for a couple days to get something to budge without a real windows install. I even eventually installed a VM of windows xp to try to set it up the "normal way" in a VM inside linux, but that still didn't work. I did end up using a windows machine to activate it finally. I'm still not sure I couldn't do it all from Linux, though.
So this is what I found out about the device. You plug it in and it does this wonderful thing where it pretends to be a USB CDROM and auto-runs a windows driver installer. The only way to turn this off in Linux is to use usb_modeswitch to detect the USB device and perform some magic to switch it to a ttyUSB0 modem/serial device. As usual, Slackware did this for me automatically without me knowing it, so I actually didn't even have to set that up.
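If your distro doesn't do the mode switch for you, usb_modeswitch plus a look at the bus is where to start (just a sketch; the vendor/product IDs you need are whatever lsusb reports for your stick):
# before the switch the stick shows up as a fake CD-ROM; note its ID pair
lsusb
# after usb_modeswitch (or your distro's udev rules) kick in, the serial ports appear
dmesg | grep -i ttyUSB
ls /dev/ttyUSB*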
The next thing I found out was how to configure the device as a modem. After hours and hours of googling and testing I found the secret ppp configuration that allows the modem to be controlled in Linux:
/etc/ppp/peers/virgin
460800
user Internet
password Internet
debug
connect '/usr/sbin/chat -f /etc/ppp/peers/chat-virgin-3'
crtscts
noipdefault
lock
modem
/dev/ttyUSB0
usepeerdns
defaultroute
connect-delay 5000
novj
/etc/ppp/peers/chat-virgin-3
TIMEOUT 10
ECHO ON
ABORT '\nBUSY\r'
ABORT '\nERROR\r'
ABORT '\nNO ANSWER\r'
ABORT '\nNO CARRIER\r'
ABORT '\nNO DIALTONE\r'
ABORT '\nRINGING\r\n\r\nRINGING\r'
"" "ATZ"
OK "AT&F"
TIMEOUT 60
SAY "CALLING ..."
OK "ATD*99***1#"
CONNECT \c
/etc/ppp/pap-secrets
Internet * Internet *
All you really need to dial up the modem is "ATZ\nATDT*99\n" or similar. Some people use 777, but 99 works for me. The PAP username and password are both "Internet". Now, using just these settings with a completely pristine modem you can actually connect to Sprint PCS' network. You get a 10.0.0.0/8 address, two Sprint PCS dns servers (68.28.146.92, 68.28.154.92) and one P-t-P gateway: 68.28.145.69 (though that may just be one of several gateways). The very trivial auditing I did showed DNS worked but ICMP, TCP and UDP were almost nonexistent. Their firewalls seem to be non-shitty; however, a DNS tunnel would probably still work.
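If you wanted to test that last claim, iodine is the obvious candidate, roughly like this (the domain, password and tunnel subnet are placeholders, and you need the NS delegation for the domain pointed at your server):
# on a server you control that answers DNS for tunnel.example.com
iodined -f -P secretpw 10.9.9.1 tunnel.example.com
# on the laptop behind the un-activated card
iodine -f -P secretpw tunnel.example.com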
What's REALLY fucked up is virgin won't give you the URL to sign up or activate your card. You need to install their software and click through it just to be taken to a public URL they could have told you up front. (Thanks a lot for wasting a ton of my time, virgin mobile.) So you go to this URL and sign up with the device - NOT using the device, mind you. You need the internet (and a computer with administrator rights) to do that. Oh, and their website sucks - I had to call tech support for them to tell me to clear my cache and restart my browser about 4 times in between parts of the sign-up process, because their shitty webapp couldn't understand the concept of expiring or reloading a cookie or session id. If you can, just try to set the whole thing up over the phone with their customer service at 1-888-322-1122.
You register your address and credit card with the device's ESN and get a login/pin code for their website. Then you log in and pick a plan and fill it up with money. The login and an additional code for activating the card (the MSID) are both new phone numbers local to your zip code. With the software installed on a computer and an account set up, you can begin to activate the device. You connect once and the software redirects you to a very simple, easily guessable URL based on the phone numbers above. This then tells you new numbers (which IIRC were the phone numbers above) to insert in fields in the connection software to complete the activation process. When you go to plug them in you notice the default values are zeros along with the last 4 of the ESN. I saw some links during my googling which makes me think some specific AT commands would allow you to register the device without using Windows or Mac. Somebody please sniff the usb connection and verify this for me.
Does this activation process require your PIN code? No. Does this require anything but two phone numbers related to an account with money in it? No. Would it be possible to spoof more than one device on their network with the same settings at the same time? Perhaps, but I bet they have a way to find dupes. (Keep in mind, this MC760 also contains a GPS receiver which i'm still trying to figure out how to tap into)
Anyway. After finally disconnecting and connecting again, the internets is go. Unplug from the crappy windoze/mac you've been forced to use to activate this thing, plug it into your linux box, wait a minute and then run `pppd call virgin'. You should be connected, given a public IP, and the internet should just do its thing. The funny thing? All the settings once you're connected are the same as when we connected before the activation. Only the IP address is changed. HMMMMM. I wonder if we could just spoof an IP address and use the internets without activating? Again, this thing has GPS built in, so don't think you wouldn't be tracked down.
The speeds I'm getting vary from 6Kbps to 1.2Mbps down and 1Kbps to 30Kbps up. This is not completely out of the range of current 4G connections, as embarrassing as that is for 4G users. So far in about half a day the connection has cut out twice for several minutes at a time and the card gets extremely hot. I would recommend the MiFi if you have the cash.
edit: when the usb card gets hot, it gets HOT, and performance suffers noticeably.
Saturday, August 7, 2010
reason number 9039808204802 why i hate RPM
[root@dhcp9001 ~]# rpm -Uvh dhcp-3.1.9999-2.cbs.i386.rpm
########################################### [100%]
package dhcp-3.0.5-7.el5 (which is newer than dhcp-3.1.9999-2.cbs) is already installed
NO, IT IS NOT NEWER. FUCK YOU. INSTALLLLLLL!!!!!
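(most likely what's going on here is the Epoch tag: the distro's dhcp package carries an epoch, the rebuilt one doesn't, and epoch beats version in rpm's comparison. you can check, then shove it in anyway:)
# see whether the installed package carries an epoch that outranks your version
rpm -q --qf '%{EPOCH}:%{VERSION}-%{RELEASE}\n' dhcp
# force the "downgrade" (or rebuild your rpm with a matching Epoch: in the spec)
rpm -Uvh --oldpackage dhcp-3.1.9999-2.cbs.i386.rpm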
Tuesday, July 27, 2010
quality control in your network
(disclaimer: i've never worked in quality control, but this is my view of it as someone who has had to work with QC)
while i sit here at 3 in the morning waiting for a server daemon to dutifully seg fault leaving me to continue debugging, i reflect on how quality control is lacking from so many networks both large and small. a single oversight *can* mean the difference between your company losing money or going under, so you must be aware of any potential problems at all times.
first of all, what is quality control and why do you need it? well, chances are that if you do your job halfway right, you're already doing it. quality control is basically double-checking that the methods you use to do your job are correct. it doesn't verify that the final product is good; it's more like tripwire in procedure form. it's making sure that things work the way you expect them to. you are already doing it when you verify what development libraries are installed. when you run unit tests against your software. when your change management system verifies a user is allowed to commit that particular piece of code, or restart that service. it's checking the tapes to make sure the backup robot is functioning. it's verifying configs are written properly on the router and the updated ones are regularly saved in version control.
typically you don't need much quality control in the average network. some product development may require strict control and observation of policies and procedures, which is usually only reinforced due to the risk of random audits or inspections. depending on your environment you may be required to do very little or no quality control at all. but i'd like to tell you about the quality control you should be doing.
the quality control people aren't usually technical people. a lot of the time they'll work with a team member of whatever they're checking out, ask questions and make notes. the first big formal procedures don't include everything. usually details get hashed out while the QC engineer talks to someone (a dev in this case) about what they do and how to check that what they did worked.
the basic principles you should keep in mind when applying quality control to your network are as follows:
1. Keep It Simple, Stupid. it doesn't have to be verbose or complex. be flexible. be easy.
2. it should be possible for someone to check the work of the quality control engineer(s).
3. you don't want to define how everything works; only how to tell if it's working as expected.
4. your goal is to make sure there are no catastrophic failures. you don't have to account for every blip along the road as long as the road is open.
5. start with the big things and move down once all the big things are covered. close off those single points of failure and move on to the other pressing issues.
hopefully this post will help give you an idea of how you can apply quality control to your network and get an improvement in the overall quality of service you provide. half of this is just making sure things work right, and the other half is reviewing the records to confirm things have been done properly in the past. here's some stuff you can do.
developers
double-check that your software is being created correctly. check that the libraries on your development boxes match up with what's going into QA or production. use unit tests on your code. make sure everything goes into version control *before* it ever hits QA or production, and make sure you know who made what change and why. make sure the method of deployment can be reversed at any time. make sure you follow change management procedures when necessary.
sysadmins
double-check that you've confirmed with everyone before you push a new piece of software to QA or production, and that you can roll it back when necessary. so check that your change management is working. it's good to have a list of the major and sub-major software that different development teams rely on (usually libraries) and get a change-management approval before ever pushing this stuff out. do it early so devs have time to test their shit with the new software. make sure your backups work correctly. you should be able to check logs and destination files regularly to ensure the backups are going well. if those or any other automated process fails it should generate an alert, and you should be able to verify those alerts are going out as expected (did /var fill up and is sendmail unable to work now?). make sure all security patches are applied in a timely manner. make sure all service-monitoring systems are working, and that failover of critical systems is in place and works as expected. make a list of all critical infrastructure and make sure all of it has hot-spare failover systems waiting in the wings. provide for ways to troubleshoot remotely in the event of total internet or system collapse. make sure any network gear you depend on also has hot failovers that work.
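as a tiny example of the kind of check i mean (the paths, threshold and address are all made up), a cron job that yells when the newest backup is stale takes ten minutes to write:
#!/bin/sh
# alert if the most recent file in the backup directory is more than ~25 hours old
BACKUP_DIR=/backups/nightly
newest=$(ls -t "$BACKUP_DIR" 2>/dev/null | head -n 1)
if [ -z "$newest" ] || [ -n "$(find "$BACKUP_DIR/$newest" -mmin +1500 2>/dev/null)" ]; then
    echo "backup check failed: newest file missing or stale in $BACKUP_DIR" \
        | mail -s "backup check failed on $(hostname)" ops@example.com
fi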
there's more implementation-specific details you sometimes need to get into with QC. i want to get more into how to begin making procedures for these systems but it's way past my bedtime. will continue when i am not so sleepy.
Tuesday, June 29, 2010
you poor apple users
i really feel sorry for people who recently got the iphone 4. i sat at work, listening to my co-worker half-heartedly explain how his iphone sometimes gets really bad reception, but that it's ok. that he naturally holds his phone in exactly the way necessary to avoid the signal loss. how it's not alright that it has this problem, but it's also ok because it doesn't affect him for the most part.
i just had this pang of empathy. like, i see now... you really don't have a choice. what are you going to do, return the phone and get a non-iphone 4? it's not an option for him. in one sense because it's now ingrained into his life; the apps, the services, the way the phone feels, the way he uses it.... that's part of him now. he can't let it go. it's scary to think of not having an iphone. and that's the second sad thing. it's actually changed his manner of thinking and now he can't get away from the thing. it's like an addiction. he can't see himself without it, and now he's trapped by it.
this is a guy who less than a year ago had never owned an iphone. he's technically a "late adopter." yet in less than a year apple has not only converted him, they've made him their slave. in some ways i can relate... since getting a smart phone i feel like i need to have a more powerful one. i need to be able to stream video, or use irc or ssh remotely, or browse web pages fast and in full render. do i actually need these things? fuck no. i was perfectly fine with my old brick sony ericsson with the wonderful camera, sending pictures and using web pages just fine. now i sit in at&t stores trying to sell myself on buying the newest piece of shit android phone which still doesn't have as good a camera (and definitely no xenon flash) as my old phone did.
so i get it now. i'll stop pointing fingers. i'll stop trying to convince you. because i know you can't escape it. you're trapped in a tar pit of technology and you can't get out even if you wanted to. i feel sorry that you're stuck with a (somewhat) shitty phone with a shitty app development model on a really shitty carrier. i wish we could all just have open-brand open-carrier open-market smartphones and share information and use the internet as freely on our mobile devices as we can on our computers.
but we can't. we probably won't ever, or not for a long time anyway. they figured out the way to trap us and steal more of our money than they ever did with the PC or laptop. it's overly-expensive unlocked phones and contracts and insurances and data plans and messaging plans. i pay $110 a month for a "standard" phone plan with "unlimited" data and "unlimited" messaging. that's $2,640 every 2 years (not counting the extra amount i pay for a new phone every 1.5 years). i don't even buy a laptop or PC that often, or gaming machines or games. i just paid $520 for an almost-new laptop, the first big purchase in over 5 years.
this is kind of depressing me. i feel like i'm trapped too.
Monday, June 21, 2010
what's wrong with my breakfast?
a recent survey found that 85% of a selection of engineers don't use twitter. they cited not caring what people had for breakfast as a reason they don't use it. this is my response.
what's so wrong with my breakfast that you can't stomach the information? i realize it's useless. i realize you don't care. but also realize, i don't care that you don't care. i am one of the mindless drones of twitter and [formerly] brightkite and facebook and myspace that update our status with whatever mindless drivel we happen to think is important in the moment at that time.
there's not much logic involved. wanna say something? say it. people will listen or they won't. but i don't expect them to. it's more of a general smattering of my thoughts and a few choice life experiences that people can refer to if they wish in the future. it can be a way for a potential employer to see if i might be a fit for their organization. it might be a way for a single young lady (or lad?) to determine if i'm worth sending a poke/message/direct tweet/whatever. perhaps i just want people to know i know certain things, or have certain opinions. whatever it is it's certainly one part exhibitionism, one part honest sharing of experiences and thoughts.
it would be nice if it had more function. perhaps a symbol prefix to be flagged in a number of ways: exclamation mark for an urgent or important message, question mark to "crowdsource" (ugh), pound sign to advertise an event, dollar sign to advertise a neat deal or other ad. extra modifiers could be postfixed to give further detail about the post. then each user could set their preferences of what kind of information comes into their stream from their friends.
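to make that concrete, here's a tiny sketch of the prefix idea in shell; the symbols and categories are just the hypothetical ones above:

# classify a status update by its first character
classify() {
  case "$1" in
    '!'*) echo urgent ;;
    '?'*) echo question ;;
    '#'*) echo event ;;
    '$'*) echo deal ;;
    *)    echo noise ;;
  esac
}
classify '! datacenter is on fire'     # -> urgent
classify '? anyone know a good VPS?'   # -> question
classify 'this fish taco is AWESOME'   # -> noise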
that doesn't exist, though. what does exist is "i'm at place XYZ and i'm having a blast!" or, "this is a picture of a fucking AWESOME fish taco" or, "who else thought iron man 2 was kind of lame?" my posts aren't intended for the general public. they're not always even meant to be useful. i send this shit out because i'm bored and i want to share with my friends. they don't even have to interact with me. in some small way i am enriching lives with my pointless drivel. i help kill boredom. sometimes i share articles which i find useful or informative. sometimes i share music that means something to me.
you probably won't find it useful at first. you'll probably be too bored with it to even start. but once you have a good chunk of friends added into your account, you'll notice you can interact with them. you can follow what some of them are doing. maybe even find out something you wouldn't have without the service. yes, we should all have real lives and not be so connected to a text interface all the time. but sometimes technology can be more than just a distraction.
granted: most of twitter is pointless, and the somewhat-useful examples i give are by and large not representative of the majority of tweets out there. twitter kinda sucks. but facebook is a good example of what it could be. the old brightkite was a good example of what it could be. maybe in the future all these techie news aggregating sites can turn into an honest-to-goodness social network of just nerds so we can all collaborate interactively and enrich each other's work and personal lives on common ground, where we can hack on it and do what we want with our own little home on the internets. but that's probably just a pipe dream.
i don't care if you use it because i don't watch my twitter feed tbh. but you can get a PixelPipe account, add your friends that are on the various social networks, and share with them all at once. you don't even have to follow what they're saying. just say something. try to make it useful; who knows, maybe you'll start a trend.
Wednesday, May 19, 2010
linode: a paradigm of indifference
my linode was unavailable for 2 hours yesterday, from 11:50am to 1:47pm. the linode itself stayed up but its connection to the internet was down. the newark datacenter it's hosted in lost connectivity, and though i could apparently still console the node, i could not reach it outside the control panel linode gives its users to admin their nodes. obviously they had some kind of connection established between their different datacenters if i could console it.
so i inquired if i could migrate the node to another datacenter.... not while the internet was down, apparently. and i couldn't do it without submitting a ticket and waiting for a response. to me this is aggravating. why is it open-source and commercial VPS solutions can all automatically migrate nodes from one dom0 to another but linode cannot do it automatically? a question that will probably never be answered.
what troubled me more than the downtime and lack of actual information or an ETA was the response from IRC support channels. lots of people were joining the channel, asking if there was a problem, what the cause was, and when it would be resolved. for a service we all pay a minimum of about $24 a month, probably the highest amount you can pay for a VPS with the same specs, this should be a completely acceptable set of queries. some people pay a lot more than that, btw. you'd think they would want to be courteous and attentive to all their questions, right? or, i dunno, put in the fucking topic of the channel the current status?
they didn't update the topic. most of the people coming in were treated with sarcasm, and no effort was made to silence those who were in the channel merely to be a dick - which is common on irc. but not a good idea for an official support channel for a paid service. most people who were there with concern that their services were unavailable were treated with a general indifference and in many cases as if they were simply pests and whiners. i guess most customers figured a 2 hour downtime in the middle of the day was something worth complaining about. i guess we were wrong.
there is still no explanation for the issue on the linode status page. at first we were told in the channel it was an issue with level 3, and the london datacenter was also down. then it was only the newark datacenter. through continuously attempting console access and traceroute/tracepath i saw how the connection would drop right at the router before my linode is usually hit. eventually the whole newark datacenter's routes stopped, but soon came back; sometime during this time console access was terminated, so clearly there was some real traffic going to linode inside the DC throughout.
because the datacenter's routers were inaccessible via traceroute for a short time i assume they were somehow convinced there may indeed be a problem with their routers, and so i'm not ready to completely blame linode. but certainly they had some connectivity and could have offered to migrate our nodes to another DC so, for example, we may have only had a 1-hour downtime instead of 2 hours. no such attempt was made. the estimated refund for this downtime is something like $0.08, which is of course nothing compared to the amount lost in business to the linode users who suffered the downtime (SLAs never reimburse anywhere near the amount you lose, so you shouldn't count on them that much). not that it would have made a difference, but the heartless way we as customers were dealt with makes me really dislike this company. now i know if i have a problem in the future, nobody's going to really try to help me. apparently they don't need me as a customer. and that's OK; i can find cheaper hosts with bigger caps elsewhere.
in the end this will be helpful. i already had a secondary VPS i paid about $5 for monthly, whose billing i let lapse out of laziness. this event will help motivate me to move back to that host and a couple others and have truly redundant services for the same cost as the one node i'd been paying for at linode. sure their web interface is fancy and you have a good deal of freedom. but considering the availability and bad customer service? i think i'll go with the cheap guys.
if you want a cheap VPS, check out Special VPS and Low End Box. they review and give promo codes for low-end VPS providers. by reading their reviews you can learn how to spot shady and unreliable hosters. do your research!
Thursday, April 8, 2010
how to make a product everyone will buy
- Make a hardware platform only you control and manufacture. Also make sure it looks very pretty and is reliable.
- Make an operating system that's user-friendly, simple, and very pretty with some killer apps and an easy dev environment. But make sure it only works on your hardware.
- Make software to provide all the personal needs one has with a computer and make it all tie together seamlessly. Make sure it's user-friendly, simple, and very pretty.
- Make accessory products which provide for things people want day to day, make it work with your hardware and software, and tie it all together seamlessly. Also, make sure it's very pretty and reliable.
Now release anything and make sure it ties together with all the previous products seamlessly, is user-friendly, simple, reliable, and very pretty. It doesn't even matter if it has a purpose or is redundant: people will buy it. It helps if you have the world's greatest PR/hype machine and if you can make people believe they're superior to someone else by owning these products. Above all make sure it is always very pretty. In this vein the product is like a luxury car: completely impractical and unnecessary, but people pay a premium for something that looks fancy and probably doesn't provide any benefit over a cheaper less pretty device.
Monday, March 15, 2010
Better Security [tips]
You've got network intrusion detection and stateful firewalls. Your kernels are patched as far as they can go for exploit prevention. You're using OpenSSH. That's awesome. Now why is it someone can still penetrate your precious servers so easily?
When you begin to secure something (anything, really - buildings, documents, servers) you have to consider everything. Each factor which could possibly be targeted in an attack could be used with any other factor to increase the likelihood of a successful compromise. So each factor has to be looked at in conjunction with every other factor. Yes, this is usually incredibly tedious and mind-bogglingly complex. To help mitigate this you can design preventative measures around each possible attack vector. In other words, add security to everything.
In the example above there's loads of attack vectors just waiting to be leveraged. One example is OpenSSH. A lot of people just use it in its default form and never add any security to it. This will lead to an exploit. If you allow password entry to an OpenSSH server, just assume it's been compromised. It's so easy to observe a password being typed or intercept it somewhere else it's laughable. Not to mention people hiding passwords under their keyboards or on their monitors! No, a password-protected SSH key is the minimum you should use to allow access to a server. The "something you know, something you have" style of two-factor authentication is far more secure than a single factor. I should stress that this is only true when properly implemented, as bad two-factor can be even less secure than strong one-factor. For more on authentication factors read this and take note of the ways different factors can be exploited (don't rely on just biometrics!).
In newer versions of OpenSSH there are even more methods to harden the authentication process, such as certificate authorities and key revocation lists. Disabling root logins, keeping a set list of users allowed to authenticate, disabling old deprecated protocols, ciphers and algorithms, and explicitly dropping any connection with conflicting host keys are all good ideas too. You should even consider the libraries used by the application - were they built with buffer overflow protection? Is PAM enabled? One need only look around to see that the underlying systems of your very critical remote-administration software could be rife with potential exploits. For every one exploit known there are probably ten unknown ones waiting to be found.
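For what it's worth, the relevant directives look roughly like this. The usernames and file paths are examples, and the CA/revocation options need OpenSSH 5.4 or newer:

# server side: /etc/ssh/sshd_config
Protocol 2
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
AllowUsers alice bob
# newer OpenSSH: trust a certificate authority and honor a revocation list
TrustedUserCAKeys /etc/ssh/user_ca.pub
RevokedKeys /etc/ssh/revoked_keys

# client side: ~/.ssh/config - refuse to connect when a host key changes
Host *
    StrictHostKeyChecking yes
    HashKnownHosts yes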
Now think about what you have access to on your own system. Consider for a moment what would happen if an attacker used the same methods you do to gain access. Would it be difficult or easy for them? If it's easy for you to access the system, it may be for them. Try to make it more difficult even for yourself to gain access and a potential attacker will have a hell of a time trying to leverage something you've left unguarded. Make your firewalls incredibly verbose and restrictive; you'd be amazed how little can be done to a system when an attacker doesn't know exactly how to use it. Require multiple levels of logins before root can be obtained, and try to minimize any need to get to the root account.
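A minimal sketch of the "default deny, log the rest" idea with iptables; the allowed subnet and port are placeholders:

#!/bin/sh
# default-deny inbound, allow loopback and established traffic,
# allow ssh only from a management subnet, log whatever else falls through
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "dropped: "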
Make all of your services run as unprivileged users. Write scripts, executed via sudo, that take no options (and sanitize their environment) to handle the tasks you actually need root for. Any admin should be able to perform basic administrative tasks as a non-root user. Make all services controllable by an "admin" group, with each service having its own unique user to minimize attacks from one service to the next. Most services can be configured to start up and bind to a privileged port and drop to an unprivileged user, but for those that cannot there are methods (SELinux, etc) to work around restrictions in an application or system.
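As a sketch, the wrapper-plus-sudoers combination might look like this. The service, script path and group name are made up; the trailing "" in sudoers is what forbids passing arguments:

#!/bin/sh
# /usr/local/sbin/restart-httpd - takes no arguments, resets its PATH,
# and does exactly one privileged thing
PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH
exec /sbin/service httpd restart

# then in /etc/sudoers (edit with visudo):
# Defaults env_reset
# %webadmin ALL = (root) NOPASSWD: /usr/local/sbin/restart-httpd ""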
Have your configuration management apply changes as a non-root user, too. A good way to think of configuration management is "let the service manage itself." Create your base model or template of configuration management scripts and then create service-specific configuration that can be run by the user of that service. In this way you don't need to worry about an attacker pilfering your configuration management and applying rules to all the machines as root. You can also create more fine-grained controls in terms of which admin (or group) can configure which service. You don't need to worry about a "trusted" user compromising your whole network if you only explicitly grant them access to the things they need to manage.
In fact, consider time-based access control for your entire network. You should expire SSH keys and user access for different services around the same time you expire old passwords. This will force you to improve the method users have to request access and hopefully increase productivity and responsiveness in this area of support. Just don't fall into the trap of allowing anything anyone asks for. Make it easy to get their manager to sign off on a request so at least there's some accountability; you can only benefit in terms of security if somebody thinks they might get fired for granting access willy-nilly.
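A rough way to sweep for access that should have expired. The 90-day window is arbitrary, and using the mtime of authorized_keys as a stand-in for "when access was last reviewed" is an assumption; adjust to whatever your policy actually tracks:

#!/bin/sh
# flag ssh keys and password ages that are past the review window
MAXAGE=90

# authorized_keys files that haven't been touched since the last review cycle
find /home -maxdepth 3 -name authorized_keys -mtime +$MAXAGE \
    -printf '%p last modified %TY-%Tm-%Td\n'

# when does each account's password (and, per policy, its access) expire?
awk -F: '{print $1}' /etc/passwd | while read u; do
    chage -l "$u" 2>/dev/null | grep -i 'password expires' | sed "s/^/$u: /"
done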
Thursday, February 25, 2010
using facebook to live-phish people indirectly
(05:38:23) Friend On Facebook: hey
(05:38:24) Friend On Facebook: hey
(05:38:25) Friend On Facebook: are you there
(06:43:28) Friend On Facebook: hry
(06:43:36) Friend On Facebook: how are you
(09:02:00) Me: morning
(09:02:54) Friend On Facebook: How are you
(09:03:06) Me: i'm good thanks, you?
(09:03:22) Friend On Facebook: I'm in a mess
(09:05:35) Me: in a mess?
(09:05:51) Friend On Facebook: yeah
(09:06:54) Friend On Facebook: i'm stranded in London,England and need help flying back home
(09:07:03) Friend On Facebook: Got mugged at a Gun point last night
(09:07:18) Friend On Facebook: all cash,credit card and cell phone were stolen
(09:09:50) Me: holy crap
(09:10:18) Friend On Facebook: thank God i still have my life and passport
(09:10:30) Me: that sux
(09:11:07) Friend On Facebook: Return flight leaves in few hours but having troubles sorting out the hotel bills
(09:12:29) Friend On Facebook: Need you to loan me some few $$ to pay off the hotel bills and also get a cab to the airport
(09:12:42) Friend On Facebook: I promise to refund it back tomorrow
(09:15:21) Friend On Facebook: are you there
(09:21:06) Friend On Facebook: are you still there
(09:25:34) Me: nice try ;)
(09:26:03) Friend On Facebook: ok
I confirmed via text message that this wasn't sent by the actual friend and they had no idea it was going on. In fact, someone the friend knows already fell victim to this phisher because they believed they were talking to the real person and had already sent money. (I wouldn't have sent him any money anyway because I don't know him THAT well, but still scary)
Lesson learned: Do not trust the internet.
Friday, February 12, 2010
hacking the samsung PN-58B860 firmware
Here's a brief example of reverse-engineering and a bad implementation of encryption.
My friend bought a 58" Plasma TV. He mentioned something about browsing youtube on it. This brief convo led my curiosity to their firmware download page. After downloading the self-extracting windows executable (yay for ZIP file compatibility!) and unzipping it I found a couple files.
pwillis@bobdobbs ~/Downloads/bar/T-CHE7AUSC/ :( ls -l
total 80
-rwxr-xr-x 1 pwillis pwillis 11695 2009-09-14 20:18 MicomCtrl*
-rwxr-xr-x 1 pwillis pwillis 19431 2009-09-14 20:18 crc*
-rwxr-xr-x 1 pwillis pwillis 21057 2009-09-14 20:18 ddcmp*
drwxr-xr-x 2 pwillis pwillis 4096 2010-02-12 14:09 image/
-rw-r--r-- 1 pwillis pwillis 7738 2009-09-14 20:18 run.sh.enc
pwillis@bobdobbs ~/Downloads/bar/T-CHE7AUSC/ :) ls -l image/
total 162080
-rw-r--r-- 1 pwillis pwillis 2048 2010-02-12 13:28 appdata-sample
-rw-r--r-- 1 pwillis pwillis 35663880 2010-02-12 13:54 appdata.fuck
-rw-r--r-- 1 pwillis pwillis 35663872 2009-09-14 20:18 appdata.img.enc
-rwxr-xr-x 1 pwillis pwillis 1573 2010-02-12 13:49 decrypt.pl*
-rwxr-xr-x 1 pwillis pwillis 1573 2010-02-12 13:41 decrypt.pl~*
-rw-r--r-- 1 pwillis pwillis 47304710 2010-02-12 13:50 exe.fuck
-rw-r--r-- 1 pwillis pwillis 47304704 2009-09-14 20:18 exe.img.enc
-rw-r--r-- 1 pwillis pwillis 18 2009-09-14 20:18 info.txt
-rw-r--r-- 1 pwillis pwillis 47 2009-09-14 20:18 validinfo.txt
-rw-r--r-- 1 pwillis pwillis 44 2009-09-14 20:18 version_info.txt
`file` tells us that MicomCtrl, crc, and ddcmp are ELF 32-bit LSB ARM executables. I ignore these: they probably don't serve a major function, and since they're plain old unencrypted executables they can be reverse-engineered with a debugger and standard development tools without much trouble.
We can see that there's obviously a shell script and two 'img' files, which are probably filesystem images, all encrypted. The question then becomes, how are they encrypted and how can we decrypt them? I start by opening up the files. The shell script appears to have a normal script-style structure, with multiple lines (sometimes repeating exactly) separated by newlines. Since it has a 'normal'-looking structure I can already guess that whatever the encryption method is, it isn't very good. Good encryption should give you no idea of what the data is or its form, and should have no apparent patterns in it.
When I open up one of the image files it seems pretty much like random garbage, as is expected. I don't expect to find much in them but i run them through the unix `strings` command anyway. All of a sudden, lots of series of the same ASCII characters tumble out:
"CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7AUSCT-CHE7"
The first thing I think is: Holy Shit. I know immediately what i'm looking at. Since my early days experimenting with XOR encryption I learned two simple principles: one, never repeat the key you're encrypting with, and two, never allow NULL characters in your data! The second applies because XORing anything against NULL (all 0 bits) just outputs the encryption key. As most people familiar with binary files know, they are absolutely rife with null characters, often in series and in large blocks. What I was looking at above was a huge block of the encryption key repeating over and over.
But could it really be that easy? I wrote a quick Perl script to test it.
#!/usr/bin/perl
# decrypt.pl - pwn a stupid firmware encryption
# Copyright (C) 2009 Peter Willis
#
# So here's the deal. I noticed a repeating pattern in the encrypted filesystem
# image of the firmware for this TV. One ASCII string repeating over and over,
# and only partly in other places. From experience with basic XOR encryption you
# may know that you can "encrypt" data by simply taking chunks of unencrypted
# data the same length as your encryption key and using an xor operation. The
# problem is, if there are any 'nulls' in your input data your key is going to
# be shown to the world in the output. My theory is this is what happened here.
use strict;
# This is the string that might be the encryption key. We don't know if this is
# the correct order of the key, only that these characters repeat themselves.
#my $possiblekey = "HE7AUSCT-";
#
# However, by looking at the input again, we know the repeating string is 10 chars
# long. Counting the number of bytes from the beginning of the file to this string
# 10 at a time we can assume the real string's order is:
my $possiblekey = "T-CHE7AUSC";
die "Usage: $0 FILE\nDecrypts FILE with a possible key \"$possiblekey\"\n" unless @ARGV;
$|=1;
open(FILE, "<$ARGV[0]") || die "Error: $!\n";
# Make sure we read an amount of bytes divisible by the length of the key or
# we would mess up our xors
while ( sysread(FILE, my $buffer, length($possiblekey) * 100) ) {
for ( my $i=0; $i < length($buffer); $i+= length($possiblekey) ) {
my $chunk = substr($buffer, $i, length($possiblekey));
print $chunk ^ $possiblekey;
}
}
close(FILE);
And we run it on the script to test the theory:
pwillis@bobdobbs ~/Downloads/bar/T-CHE7AUSC/ :) ./decrypt.pl run.sh.enc
#!/bin/sh
PROJECT_TAG=`cat /.info`
WRITE_IMAGE()
{
if [ -e $2 ] ; then
echo "==================================="
echo "$1 erase & extract & download!!"
echo "==================================="
$ROOT_DIR/ddcmp -d -i $2 -o $3
sync
echo "===============DONE================"
elif [ -e $2.enc ] ; then
echo "==================================="
echo "$1 erase & extract & download!![Enc]"
echo "==================================="
$ROOT_DIR/ddcmp -e $PROJECT_TAG -i $2.enc -o $3
sync
echo "===============DONE================"
fi
}
As you can see, the script decoded beautifully with this key on the first try. The other two encrypted files also decode fine. It turns out "exe.img.enc" is a FAT filesystem image with an x86 boot sector (obviously for 'dd'ing to some storage device on the TV). The "appdata.img.enc" file is a Squashfs filesystem.
It took about 20 minutes for me to download and decode this supposedly-encrypted firmware image. This is the lesson: use a real tool for encryption. Do not think you know how to do it yourself. And don't waste your time trying to obfuscate a filesystem image from me; i'll just crack open the TV and dump the flash ROM.
For extra fun: note the name of the directory the executable was originally extracted into. :-)
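For contrast, here's roughly what using a real tool looks like. The file names are from the firmware above, the key file is hypothetical, and of course none of it helps much if the key has to ship inside the TV anyway:

# generate a key and encrypt (vendor side)
openssl rand -out fw.key 32
openssl enc -aes-256-cbc -salt -pass file:fw.key -in appdata.img -out appdata.img.enc

# decrypt (device side - meaning the device has to protect fw.key somehow)
openssl enc -d -aes-256-cbc -pass file:fw.key -in appdata.img.enc -out appdata.img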
Friday, January 29, 2010
why you should not trim your system install
I happen to think it's *mostly* pointless to trim the install of a system's packages. When I install a system, be it a desktop, server, development machine, etc I install all available packages for that distro. A lot of people disagree with me. They usually say:
- "All those packages take up space!"
Go buy a hard drive made this decade. And while you're at it, stop partitioning your kicks with 2GB /usr partitions and 500MB /tmp partitions. If your disk is full it's full; there's no benefit in letting it fill up sooner than later. Your filesystem should have been created with at least a 1% reserve for root only, which will allow you to log in and fix the issue (unless you are running filesystem-writing apps as root; you're not, right?), not to mention the system monitors you use to tell you before the disk fills up.
- "But it's a security risk!"
Do you really think your system is more secure because it lacks some binary files? While you're spending time trimming your package list, you're forgetting the basics of system security like firewalling, disabling services, checking the filesystem for overly-permissive files/directories, setuids, etc. Just because you didn't install that setuid kppp doesn't mean there isn't a hole somewhere else on your system. Do a proper audit of your system once everything is installed (a rough sketch of that kind of audit is at the end of this post). This will eliminate typical system attacks and you'll be secure enough to handle exploits in userland apps.
- "It takes extra time to update all those packages!"
Is your network that slow? Even if you upgraded all of KDE or Gnome it shouldn't take but a couple minutes to download the updated packages. Of course you were a good admin and you have a kickstart repository on the LAN of each machine (or accessible a hop or two away), so the bandwidth should be immaterial.
- "Yum/apt will take care of the extra packages if you need to install something later."
Oh boy! Let's talk YUM, shall we? First of all it's one of the shittiest pieces of vendor-approved package managing/updating software ever. Read the source if you dare (and if you can). The only thing that's more retarded than its code is how retarded it is to have to troubleshoot YUM when it doesn't do what you want it to do. Let's go down the checklist (a few of these as actual commands follow the list):
- Run `yum clean all`
- Check that the package's --requires exist in packages in the repo
- Check that the 'meta' arch of the package matches the arch of the machine
- Make sure there isn't a duplicate package with a different arch in the repo
- Make sure there isn't a package with a similar name but higher epoch in the repo
- Make sure the name is the same
- Make sure the version is higher and has the same exact format as any other package with the same name
- Make sure the metadata in the repo is up to date, and re-gen it just to be sure
- Do a `yum clean all` again
- Sacrifice a goat to the Yum maintainers
- Rename your first born to 'Yellowdog'
- Etc
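A few of those checklist items as actual commands. The package name and repo path are placeholders, and repoquery comes from yum-utils:

#!/bin/sh
PKG=somepackage
PKGFILE=/srv/repo/el5/x86_64/somepackage-1.0-1.el5.x86_64.rpm
REPO=/srv/repo/el5/x86_64

yum clean all

# do the package's requirements exist in the repos yum knows about?
repoquery --requires "$PKG"

# do the arch/epoch/version look sane for this machine?
rpm -qp --qf '%{NAME} %{ARCH} %{EPOCH}:%{VERSION}-%{RELEASE}\n' "$PKGFILE"
uname -m

# regenerate the repo metadata in case it's stale, then clean again
createrepo "$REPO"
yum clean all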
Usually someone pushing a bad package or a dependency of a package that used to work will be what breaks Yum. It'll go unnoticed until you really really need that package and its dependencies installed. Then you'll spend hours (and sometimes days) trying to get it installed and fix whatever was broken with rpm/Yum. Whereas if you had installed everything right after your kick, the package would just be there, ready for use. You should only use something newer than what came with your kick if you really really need it.
Of course experience teaches us the folly of trusting any update to an rpm. Whenever you push a new package you must test it on the host it'll be installed on. The package itself may not install correctly via Yum (though using just RPM would probably work), or there could be some other problem with the contents of the package that you'd only know by running the programs contained in the package on the target host. Because we do this, we don't need Yum to browbeat us every time the RPM (or something else) isn't 100% to its liking. If you just install packages en-masse and test them you can skip the whole process of troubleshooting Yum and skip right to troubleshooting the package itself on the host it's intended for, which we'd be doing anyway with Yum.
For a VPS or some other disk-and-bandwidth-limited host it's obvious that trimming packages will save you on both of your limited resources. But on a normal network with multiple hosts and plenty of storage I wouldn't spend a lot of time tweaking my kickstart packages list.
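And the audit I keep nagging about, sketched out. Run something like this right after the kick finishes and diff it against a known-good baseline (RHEL-ish tools assumed):

#!/bin/sh
# post-install audit sketch

# setuid/setgid binaries
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -ls

# world-writable files
find / -xdev -perm -0002 ! -type l -ls

# what's actually listening on the network
netstat -tulpn 2>/dev/null || ss -tulpn

# what's set to start at boot
chkconfig --list 2>/dev/null | grep ':on'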
Friday, January 22, 2010
hype of the century
There's never going to be an end to the ridiculous hyperbole surrounding new, expensive, fashionable technology. It never matters if the thing they're hyping is actually good. I think this report sums up exactly the kind of situation I see all the time from the masses of ignorant media fiends.
However, there does seem to be a kind of peak that is hard to reach again. I'd like to tell you all about the biggest peak to date: the iPhone. Seems you can't talk about the iPhone in a negative context without someone bringing up the point that no matter what, I have to agree, the iPhone "changed the world". WELL THEN. Let's just take a look at this entrancing device and see if this assertion may be true?
First of all, I don't see any marketing blitzes in Rwanda or Haiti or the North Pole for this shiny hunk of metal and silicon. In fact, when it was first released the device was only accessible by those with a considerable amount of money and a specific geographic location and mobile service provider (AT&T). Over the years they've released the iPhone officially in other territories and it's possible to purchase one unlocked for other carriers. <edit> You can also now buy the iPhone locked for dozens of carriers around the world. But the device is not universal to all carriers and nations. </edit> This kind of access does not change the world. Maybe they meant to say it "changed the PART OF THE world WHERE I LIVE AND MY ACCESS TO MOBILE CONTENT ON ONE CARRIER". That may be true enough, but that's not what they actually say.
Did it change the industry? Perhaps... The idea of a manufacturer/producer of a phone dictating the terms of how the phone operates and even taking a cut of the subscription profits was certainly a hybrid business model. But did it change the industry? Thus far, no other vendor has accomplished such a feat. With the release of the Nexus One from Google we see an unlocked phone provided on multiple carriers whose operating system is solely controlled by Google. This is probably the second time I know of that a carrier's dictatorial domination of a device has been stripped away. But the iPhone was released three years ago. It took 3 years to begin to change the industry? What took so long? You can't say Apple's original hybrid business model "changed the industry" because the industry remained the same - only one schmuck corporation (AT&T) went along with the idea of total vendor lock-in.
If anything, the iPhone has influenced the way the industry builds out its services. I'm willing to bet that the volume of data is starting to surpass that of the volume of voice traffic. Text messages already replace most short conversations, and one day perhaps the voice channels will all be replaced by a single digital link with VoIP connecting users to providers. There's no reason why in the future you couldn't pick a different long-distance carrier than your "mobile ILEC", similar to land-line phone call routing for the past however many decades. It's obvious AT&T can't keep up with the current flow of data, however, and other carriers must see the need to expand their capacity.
Nobody rational or educated would argue that the iPhone ushered in a new era of smart phones. Smart phones had all of the features of the iPhone and more for years. Granted, those phones were usually high-priced unlocked devices more for the early adopters with green falling out of their pockets. The iPhone itself wasn't exactly cheap, starting out at $399 (and $499 for the 16GB version) plus a 2-year contract. OUCH. (To contrast that, the extremely capable Nokia N95 was around $500 at release time in the same year - unlocked with no contract). The phone even lacked basic features like Bluetooth profiles for input devices (or virtually any other useful profile, including that for wireless stereo headsets). It couldn't send picture messages, it couldn't copy-and-paste... Other than the multitouch interface it wasn't revolutionary in terms of technical gadgetry. You could get more done with a brick-style phone on any carrier than you could with the iPhone.
The one thing you could say was a game-changer was the App Store. Apps for phones are nothing new. Ever since Java became the "operating system" for phones in the late 90's people have been making custom apps and selling them for big bucks world-wide. But there was never an easy way to just look for, pay for and install any given app. The App Store made them accessible to any user at all times. This in turn brought more developers in, and with the fast processor and moderately-fast bandwidth many apps were brought about to bring new kinds of content to the device. There were a couple other "app store"-style websites around, but nothing that tied directly into a phone. Google's Android followed suit with their own app store not long after, and Microsoft is just now starting to get into the game.
In the end, we now have an industry saturated with look-alike devices, many of which provide more features and functionality than the iPhone itself. But they will never surpass the iPhone in terms of sales or user base. And the reason comes directly from Apple's ubiquitous business model of total lock-in. Control the hardware + control the software = control of the users. At this point, any device that tries to come in and "shake up the game" will be nothing but a distraction for the uninformed random user who stumbles into a carrier's brick-and-mortar trying to be told what they should buy. Most Web 2.0-savvy users will be looking for a phone that supports the "apps" of a particular service or web site, and the only two options today seem to be iPhone or Android. Some people still make Blackberry apps, but that's probably going to become niche for apps which cater to business or corporate users and less for the general public. So an iPhone pretender will always be that, and never become as successful - until they have their own App Store that can compete.
If you're quite done drinking my Haterade, i'll admit that the iPhone is nice. At this point it's the cheapest possible smart phone that provides such a feature set, support from developers and an incomparable user base. But world-changing? Ask anyone today who doesn't have an iPhone if their world seems different since 2007. They'll probably tell you about how the economy is fucked and the banks are running our government, and thank god we have a new President (or holy jesus we're all fucked because of the new President). But if you ask them to list the top 10 things that have changed their world since then, the iPhone won't be among them. Because most people don't give a shit. Their mobile phone needs have been met. And other than general curiosity in high tech gadgetry, they won't have the need to buy into something else.
However, there does seem to be a kind of peak that is hard to reach again. I'd like to tell you all about the biggest peak to date: the iPhone. It seems you can't talk about the iPhone in a negative context without someone insisting that, no matter what, I have to agree the iPhone "changed the world". WELL THEN. Let's take a look at this entrancing device and see whether that assertion holds up.
First of all, I don't see any marketing blitzes in Rwanda or Haiti or the North Pole for this shiny hunk of metal and silicon. In fact, when it was first released the device was only accessible to those with a considerable amount of money, in a specific geographic location, and on one mobile service provider (AT&T). Over the years they've released the iPhone officially in other territories, and it's possible to purchase one unlocked for other carriers. <edit> You can also now buy the iPhone locked for dozens of carriers around the world. But the device is not universal to all carriers and nations. </edit> This kind of access does not change the world. Maybe they meant to say it "changed the PART OF THE world WHERE I LIVE AND MY ACCESS TO MOBILE CONTENT ON ONE CARRIER". That may be true enough, but that's not what they actually say.
Did it change the industry? Perhaps... The idea of a phone's manufacturer dictating the terms of how the phone operates, and even taking a cut of the subscription profits, was certainly a hybrid business model. But did it change the industry? Thus far, no other vendor has pulled off the same feat. With the release of the Nexus One from Google we see an unlocked phone offered on multiple carriers whose operating system is solely controlled by Google. As far as I know, that's only the second time a carrier's dictatorial domination of a device has been stripped away. But the iPhone was released three years ago. It took three years to begin to change the industry? What took so long? You can't say Apple's original hybrid business model "changed the industry" when the industry remained the same - only one schmuck corporation (AT&T) went along with the idea of total vendor lock-in.
If anything, the iPhone has influenced the way the industry builds out its services. I'm willing to bet that the volume of data traffic is starting to surpass the volume of voice traffic. Text messages already replace most short conversations, and one day perhaps the voice channels will all be replaced by a single digital link, with VoIP connecting users to providers. There's no reason why in the future you couldn't pick a different long-distance carrier than your "mobile ILEC", similar to how land-line calls have been routed for the past however many decades. It's obvious AT&T can't keep up with the current flow of data, however, and other carriers must see the need to expand their capacity.
Nobody rational or educated would argue that the iPhone ushered in a new era of smart phones. Smart phones had offered all of the iPhone's features and more for years. Granted, those phones were usually high-priced unlocked devices aimed at early adopters with green falling out of their pockets. The iPhone itself wasn't exactly cheap either, launching at $499 for the 4GB model ($599 for the 8GB) plus a 2-year contract. OUCH. (By contrast, the extremely capable Nokia N95 was around $500 at release in the same year - unlocked, with no contract.) The phone even lacked basic features like Bluetooth profiles for input devices (or virtually any other useful profile, including the one for wireless stereo headsets). It couldn't send picture messages, it couldn't copy-and-paste... Other than the multitouch interface it wasn't revolutionary in terms of technical gadgetry. You could get more done with a brick-style phone on any carrier than you could with the iPhone.
The one thing you could say was a game-changer was the App Store. Apps for phones are nothing new. Ever since Java became the de facto "operating system" for phone apps in the early 2000s, people have been making custom apps and selling them for big bucks world-wide. But there was never an easy way to just look for, pay for and install any given app. The App Store made them accessible to any user at all times. That in turn brought more developers in, and with the fast processor and moderately fast bandwidth, apps appeared that delivered new kinds of content to the device. There were a couple of other "app store"-style websites around, but nothing that tied directly into a phone. Google's Android followed suit with its own app store not long after, and Microsoft is just now starting to get into the game.
In the end, we now have an industry saturated with look-alike devices, many of which provide more features and functionality than the iPhone itself. But they will never surpass the iPhone in terms of sales or user base. And the reason comes directly from Apple's business model of total lock-in. Control the hardware + control the software = control of the users. At this point, any device that tries to come in and "shake up the game" will be nothing but a distraction for the uninformed random user who stumbles into a carrier's brick-and-mortar store waiting to be told what to buy. Most Web 2.0-savvy users will be looking for a phone that supports the "apps" of a particular service or web site, and the only two options today seem to be iPhone or Android. Some people still make Blackberry apps, but that will probably become a niche for apps that cater to business and corporate users rather than the general public. So an iPhone pretender will always be just that, and never as successful - until it has its own App Store that can compete.
If you're quite done drinking my Haterade, I'll admit that the iPhone is nice. At this point it's the cheapest smart phone that provides such a feature set, such support from developers and an incomparable user base. But world-changing? Ask anyone today who doesn't have an iPhone if their world seems different since 2007. They'll probably tell you about how the economy is fucked and the banks are running our government, and thank god we have a new President (or holy jesus we're all fucked because of the new President). But if you ask them to list the top 10 things that have changed their world since then, the iPhone won't be among them. Because most people don't give a shit. Their mobile phone needs have been met. And other than general curiosity about high-tech gadgetry, they won't feel the need to buy into something else.
Monday, January 11, 2010
safe subversion backup with rsync
Let's say you have a subversion repository and you want to keep a backup on a remote host. Doing a "cp -R" is unsafe, as mentioned here, so your two safe methods of copying a subversion repo are 'svnadmin hotcopy' and 'svnadmin dump'. The former only makes a local copy, but the latter writes a single dump file to stdout, which is the most flexible method (though it does not grab the repo configs).
The simple method to back up the repo from one host to another would be the following:
- ssh user@remote-host "svnadmin dump REPOSITORY" | svnadmin load REPOSITORY   # assumes the local REPOSITORY already exists ('svnadmin create REPOSITORY') and is empty
That would create an identical copy of remote-host's repository on the local host. However, for big subversion repositories this could take a lot of time and bandwidth. Here is a form that trades disk space for time and bandwidth:
- ssh user@remote-host "svnadmin dump REPOSITORY > REPO.dump" && rsync -azP user@remote-host:REPO.dump REPO.dump && svnadmin load REPOSITORY < REPO.dump   # again, load into a freshly created, empty local REPOSITORY
This makes a dump on the remote host, and rsync transfers only the differences in the dump file to the local host on subsequent runs. Note that the dump file is uncompressed and rather large, so if you have lots of spare cycles you can pipe the output of 'svnadmin dump' into 'lzma -c' and do the reverse before 'svnadmin load', as sketched below. (The rsync '-z' flag uses gzip compression on the wire, but lzma will save you much more space on disk and thus possibly more time.)
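A minimal sketch of that compressed variant, assuming the 'lzma' command is available on both hosts and that REPOSITORY, REPO.dump.lzma and the paths are placeholders you'd adjust for your setup:
- ssh user@remote-host "svnadmin dump REPOSITORY | lzma -c > REPO.dump.lzma"   # write a compressed dump on the remote host
- rsync -aP user@remote-host:REPO.dump.lzma REPO.dump.lzma   # pull it over; '-z' omitted since the file is already compressed
- rm -rf REPOSITORY && svnadmin create REPOSITORY   # recreate an empty local copy to load into
- lzma -dc REPO.dump.lzma | svnadmin load REPOSITORY   # decompress and load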
edit lol, i just realized compressing before rsync is probably pointless, other than reducing the size on local disk. also 'svnadmin hotcopy' is probably the exact same if not better than dumping to a local file vs piping from one host to another (and saves the config).
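For the hotcopy route the edit above mentions, a rough sketch (the /srv/svn paths and REPOSITORY-backup directory are hypothetical; hotcopy runs on the remote host, then rsync mirrors the result locally):
- ssh user@remote-host "rm -rf /srv/svn/REPOSITORY-hotcopy && svnadmin hotcopy /srv/svn/REPOSITORY /srv/svn/REPOSITORY-hotcopy"   # consistent copy of the live repo, includes conf/ and hooks/
- rsync -azP --delete user@remote-host:/srv/svn/REPOSITORY-hotcopy/ REPOSITORY-backup/   # mirror the hotcopy into the local backup dir
The upside here is that rsync can do its delta transfer against the individual repository files rather than one giant dump, and the hotcopy keeps the repo config that 'svnadmin dump' leaves behind.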