you know that familiar problem where your development environment isn't in sync with the production servers? you upgrade the software so you can test new features, you push to production, and HOLY SHIT the site breaks because you weren't developing on the same platform. that is, if you aren't already doing the fucking upgrade polka just trying to get the newer software shoehorned into the old-ass legacy PROD machine. here's a potential fix.
you keep your codebase locked to your system configuration. if a dev server's software changes (or really, if anything on the system changes), you take a snapshot of the system and tag the code against that new snapshot. that way the code is always pinned to a known system configuration, so you always know which code works with which configuration.
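here's a minimal sketch of the tagging idea, assuming an rpm-based box and a git checkout (the naming scheme and the whole script are hypothetical; adjust the package query for your distro and the tag command for your VCS):

```python
#!/usr/bin/env python
# hypothetical sketch: hash the installed package list into a short fingerprint
# and tag the current code against it, so code and system config stay paired.
import hashlib
import subprocess
import time

def system_fingerprint():
    # every installed package+version, sorted, hashed into a short id
    pkgs = subprocess.check_output(
        ["rpm", "-qa", "--qf", "%{NAME}-%{VERSION}-%{RELEASE}\n"]).decode()
    return hashlib.sha1("\n".join(sorted(pkgs.splitlines())).encode()).hexdigest()[:12]

def tag_code_to_snapshot(repo_dir="."):
    snap = "sysconfig-%s-%s" % (time.strftime("%Y%m%d"), system_fingerprint())
    # tag whatever is checked out so this code is forever tied to this snapshot
    subprocess.check_call(["git", "tag", "-f", snap], cwd=repo_dir)
    return snap

if __name__ == "__main__":
    print("tagged code against snapshot: " + tag_code_to_snapshot())
```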
also, you always keep a system that matches your production machine. your HEAD development tree may be a wild jungle, but only code committed to the branch matching the production machine can be tested, and only a development machine with the exact same system configuration as the production machine can test it. so you have your HEAD dev system and your PROD dev system, and the PROD dev system mirrors the PROD machine, not the other way around. you can call this "QC"/"QA" if you want, but dev systems usually accumulate local edits, skip normal deployments, and suffer other bullshit bit-rot creep.
so on the HEAD machine developers can test whatever the fuck they want, but until it works on the PROD dev machine it can't be deployed to PROD. this will also force you to actually do unit testing and other fun shit to prove the PROD dev code works as expected before you can deploy it. yay!
Wednesday, December 21, 2011
FUCK ANDROID
AAARRRGHHH.
I need to make a phone call to a local business right now. I can't, because every time I dial the number and press 'Dial' the "3G Data" option window pops up. It simply will not dial the number. I have put up with your shit, Android, and i'm done with it.
This isn't the first problem you've had. Your apps seem to crash daily, or one app sucks up "35% CPU" and makes every other app lag like my grandmother in molasses. Stock apps like the Browser and Maps can bring the whole thing to its knees. And in these weird states, the few times I actually receive a phone call, I can't swipe to answer it because the UI is too lagged. Let's not even talk about the native text messaging, which is not only the laggiest SMS app i've ever used, but also makes this the first phone i've had that actually fails to send SMSes on a regular basis.
Google's apps in particular seem to suck. Google Voice takes about 30 seconds just to refresh the text messages once I open the app. Maps has weird bugs so that if I lock the screen while viewing the map, the map freezes and I have to kill it and restart it. Randomly the whole device will just appear sluggish even if I haven't been using it. And some apps become impossible to uninstall, becoming nag-ware for registration or payment.
A PHONE SHOULD BE A PHONE. All I wanted was to have GPS, Maps and Browsing built into my phone, and maybe a nice camera (but years later apparently Sony is the only company capable of putting a decent camera in a phone). But that was too much for you, Android. You had to be fancy. And now i'm throwing you away.
Nobody should be making Web Apps
let's face it. you're doing it wrong. it's not your fault; the rest of the world told you it was okay to try to emulate every aspect of a normal native application, but in a web browser. don't fret, because i'm about to explain why everything you do is wrong and how you can fix it. but first let me ask you a question.
what do you want to accomplish?
A. producing a markup-driven portable readable user agent-independent interpreted document to present information to a user?
B. letting a user interact with a custom application with which you provide services and solutions for some problem they have for which they don't have a tool to solve it?
if it's A you chose the correct platform. a web browser is designed to retrieve, present and traverse information through a vast network of resources. it has the flexibility, speed and low cost of resources to let you pull tons of content quickly and easily. after all, we all have at least a gigabyte of RAM. you should be able to browse hundreds of pages and never max out that amount of RAM - right?
if it's B this is the wrong choice, and for a simple reason: a browser is not an application platform. it was never designed to give you all the tools you need to support the myriad needs of real applications. imagine all the components of an operating system and what it provides to allow simple applications to do simple things. now consider a web browser and what it provides. starting to get the picture? here's a simple comparison: an operating system is a fortune 500 company and a web browser is a guy with a lemonade stand. no matter how many 'features' he can sell you, the super low-calorie healthy organic sweetener, the water sourced from natural local clean purified streams, whatever: it's still lemonade.
technical reasons why web apps are dumb:
- in a very literal sense the browser is becoming Frankenstein. slow, kludgy, gigantic, unstable, a security risk.
- verifying that my credit card number was typed in correctly is fine, but javascript should never run actual applications or libraries.
- applications that can interact with the local machine natively can do a wide array of things limited only by your own security policies and the extent of your hardware and installed libraries (which can be bundled with apps). web apps have to have the right browser installed, at the right version, and compete with whatever other crap is slowly churning away, restricted by hacked-on browser security policies designed to keep your browser from hurting you.
- web applications are not only sensitive to the user's browser & network connection, they require your server backend to provide most of the computation resources. now not only can the user rely on the application less, you have to put up the cost of their cpu & network time, which becomes much more difficult than it is expensive once you really start getting users.
- the user doesn't really give a shit how their magical box provides them what they want. they just want it immediately and forever and free. so you're not really tied to using the web as long as you can provide them the same experience or better.
- seriously - Web Sockets?! are you people fucking insane? why not a Web Virtual Memory Manager or Web Filesystems? or how about WebDirectX? ..... oh. nevermind. *headdesk* i can't wait for Real-Time Web Pages.
i know what you're saying: what the hell else am i supposed to do? make native apps? i would compare the smartphone mobile app market to the desktop app market but the truth is it's ridiculously easier to bring in customers for mobile apps. and yes it's probably ten times easier building web apps with all the fancy friendly frameworks that can be tied together to push out new complete tools in hours instead of days or weeks. but that's also no excuse because it's all just code; we could build easy frameworks for native or mobile apps too. what is the alternative? is there one?
i don't think there is. Yet. you see, where the web browser fails us we have an opportunity to create a new kind of application. something that's dynamic and ubiquitous yet conforms to standards. something easy to deploy, cross-platform and portable. something using tools and libraries implemented in fast native code. something with an intuitive interface that exposes a universal "store front" to download or buy applications to suit our needs. something local AND scalable. sounds like a pipe dream.
maybe we can't have everything. but i see pieces of this idea in different places. when i look at Steam i see most of what's necessary: a store for applications, a content delivery system, a (mostly) secure user authentication mechanism. if you could take the simplicity of Python (but you know, without the annoying parts), make it reeeeallly cross-platform by design, and then produce simple frameworks to speed up the building of new complete tools, you'd have most of the rest.
the last thing you'd need is a way to make it sexy enough for everyone to pick up and start using. that's the difficult part. it seems to me that competition between a few major players and the evolution of standards for new web technology is what drove the arms race that made "web apps" the most ubiquitous computing platform for user interaction (next to mobile apps). that, and the trendy, almost generation-specific explosion of time invested in javascript-based frameworks, led everyone to just build web apps by default. the new solution has to be needed for something before anyone will pick it up. you could start it as a browser pet project, but it's uncertain whether other browsers would adopt the technology or wait it out.
this is where my sleep deprivation and the hour's worth of work i need to put in make me ramble more than usual. my main point here is: make it easy, make it convenient, and make it somehow better than what we've had before. the end goal is, of course, to stop creating bloated-ass crazy insecure web browsers that threaten our financial and personal lives, and instead make stable, powerful applications which don't (necessarily) need a specific kind of browser or class of machine to run.
bottom line: browsers aren't an operating system and the world wide web is not the internet. the web is merely one part of the internet.
(disclaimer: i don't write web apps)
Labels: i was tired and angry, rants, sorry about this
Wednesday, November 2, 2011
what nobody using Linux gets about usability
nobody cares what makes your operating system work. they don't want to perform steps, or look up guides, or learn how to do something. everything they want to do should be completely intuitive.
in Linux, it is never intuitive.
programs are usually weird alternative programs with strange names that aren't intuitive at all. Windows and Mac have cross-platform applications which everyone just knows about. Linux doesn't, in general. i mean sure there's apps, but nobody in Windows or Mac uses open source programs. think about that for a minute.
you always need to know what kind of 'package' to download, and some weird method of installing it, and possibly 'dependencies'. nobody in Windows knows what the fuck a dependency is, or why they'd ever come into contact with one.
nobody in Windows ever uses a console. "What is this, DOS?" it's fucking retarded. the mere idea that a shell even *exists* on a Linux computer is kind of ridiculous. the fact that command-line programs are so robust and friendly to a command-line user just serves as a crutch for the techies who have no easy way to do the same thing from the GUI.
the motto of Linux development should be: No More Console.
the second motto of Linux should be: No More Knowledge.
a complete idiot should be able to use a computer with no more than 2 minutes of playing around. it should also be as un-scary as possible for them. usually Linux GUIs are incredibly complex and scary for users. they also usually look incredibly toy-like and cheap, but that's not really a usability thing.
Linux should work like the iPad.
think about that.
Tuesday, October 25, 2011
Getting wifi to work on an Asus Eee PC 1015PEM
This Asus has a Broadcom BCM4313 wifi card. The Linux kernel that ships with Slackware 13.37 comes with an open-source driver for this wifi card. Unfortunately, it does not come with the firmware for the card, so the driver is useless. If you try to download and install the driver from Broadcom, it crashes the machine (even if you blacklist every other module).
The solution is to download and install the firmware according to the driver's README (which you can find at /usr/src/linux-2.6.37.6/drivers/staging/brcm80211/README). Once the firmware files are copied to /lib/firmware/brcm/ you need to make symlinks "bcm43xx-0.fw" and "bcm43xx_hdr-0.fw" pointing at whichever of the copied files have the closest-matching names.
Of course the git repository that has the files is not working, so you have to pull them from somewhere else. You can get the archive from Debian here: http://packages.debian.org/sid/firmware-brcm80211 (download the source package's .tar.gz file and extract it).
Once you set up the firmware, just reboot and the machine should attempt to load the brcm80211 module and the right firmware automatically. Don't use the "wl" driver from Broadcom, as it will crash the machine. Add "wl", "b43" and "ssb" to the /etc/modprobe.d/blacklist.conf file just in case it tries to load those.
UPDATE
Apparently, that stuff doesn't work the way it should. You have to upgrade to the latest 2.6 kernel (2.6.39.4 as of this writing) and load the 'brcmsmac' driver, as this is the new driver used by the BCM4313 on the latest 2.6 kernels. Blacklist all the other drivers first ('wl', 'brcm80211', 'b43', 'ssb', 'b43-legacy', 'bcma'). I'm not sure if this is because the latest firmware is incompatible with older driver versions, but my machine kept crashing unless I used the latest kernel and the brcmsmac driver with the latest firmware. What a pain in the ass.
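If you want to script the blacklist and firmware symlink dance, it boils down to something like this (run as root; the SOURCE_* filenames are placeholders for whatever the extracted Debian package actually contains on your box):

```python
#!/usr/bin/env python
# sketch of the blacklist + firmware symlink setup described above. run as root.
# SOURCE_* names are placeholders for the real files in the extracted
# firmware-brcm80211 package; note this appends blindly, so run it only once.
import os

BLACKLIST = ["wl", "brcm80211", "b43", "ssb", "b43-legacy", "bcma"]
FIRMWARE_DIR = "/lib/firmware/brcm"
SYMLINKS = {
    "bcm43xx-0.fw":     "SOURCE_FIRMWARE.fw",      # placeholder
    "bcm43xx_hdr-0.fw": "SOURCE_FIRMWARE_HDR.fw",  # placeholder
}

def write_blacklist(path="/etc/modprobe.d/blacklist.conf"):
    with open(path, "a") as f:
        for mod in BLACKLIST:
            f.write("blacklist %s\n" % mod)

def link_firmware():
    for link, target in SYMLINKS.items():
        dest = os.path.join(FIRMWARE_DIR, link)
        if not os.path.lexists(dest):
            os.symlink(target, dest)   # relative link inside /lib/firmware/brcm

if __name__ == "__main__":
    write_blacklist()
    link_firmware()
    print("reboot and check that brcmsmac loads with the new firmware")
```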
Monday, October 24, 2011
pNFS is in Linux 3.x!
I totally missed it, but pNFS is officially in Linux 3.0 and beyond. If you need a simple, stable, parallel network filesystem that is included with vanilla Linux kernels, now you have it. Any NFS 4.1 compatible client should be able to use servers set up with pNFS.
Here's the docs I found so far on it:
http://wiki.linux-nfs.org/wiki/index.php/PNFS_Setup_Instructions
http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd
http://wiki.linux-nfs.org/wiki/index.php/PNFS_Block_Server_Setup_Instructions
http://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_Setup
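For what it's worth, the client side is nothing exotic; an NFS 4.1 mount looks like this (the server name and export path below are made up):

```python
#!/usr/bin/env python
# minimal client-side sanity check: mount an NFS 4.1 export (pNFS kicks in
# automatically if the server hands out layouts). server/export names are made up.
import os
import subprocess

SERVER = "nfs.example.com"      # hypothetical pNFS-enabled server
EXPORT = "/export/scratch"      # hypothetical export
MOUNTPOINT = "/mnt/pnfs"

if not os.path.isdir(MOUNTPOINT):
    os.makedirs(MOUNTPOINT)
# 'minorversion=1' is what requests NFS 4.1 on these kernels
subprocess.check_call(
    ["mount", "-t", "nfs4", "-o", "minorversion=1",
     "%s:%s" % (SERVER, EXPORT), MOUNTPOINT])
print("mounted; check /proc/self/mountstats to see which layout driver is in use")
```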
Thursday, October 20, 2011
most new startup companies are stupid
Let me go down the list of some kinds of startup companies from Start-Up 100 and why they're stupid.
Advertising and Marketing
Right off the bat, a bunch of useless bullshit. I don't ever WANT to see an advertisement or marketing. I want to be able to find that shit if i'm looking for it, but I don't want any of it to just show up somewhere.
Audio and Media
More stupid "web 2.0" websites based around music and other crap. Internet radio has existed for well over a decade. I don't need another place to not find the music I want to hear. (Pandora sucks, Spotify sucks, Grooveshark sucks... it all sucks. I'll turn on Shoutcast or Last.FM Radio if I want random music that I kind of like)
Education, Recruitment and Jobs
First of all, if you didn't get a normal education, some web 2.0 shell of a company probably isn't going to educate you any better. We have Dice and Monster, and people who are competent at what they do will network and find jobs in person like normal.
Enterprise: Security, Storage, Collaboration, Databases
Finally, startups intended for technology. Too bad all of them suck. Most people i've seen who try to develop startups haven't worked very long in tech, so they design or implement poorly, and the ones that survive do so from sheer luck. Most of these solutions are crap or unnecessary.
Finance, Payments and Ecommerce
Again, i'm pretty sure all the big contenders have already been created. It'd be interesting if they actually had a new way to deal with finance or ecommerce, but most of it's been done and there's not a lot of room for innovation.
Gaming, Virtual Worlds
Ok, here's something that actually has promise. Make a stupid game which is addictive and make a billion dollars like Rovio.
Social Networking and Collaboration
JUST. LET. IT. DIE.
Social networking is a fad. You know what the original social network was? AOL. Just let the shit die. God I hate social networks.
Travel and Transport
I guess there's still a few niche/boutique businesses you could start in this space. But if it's another "how to look up cheap flights" website, just kill yourself.
What I would like to see more of are startups aimed at bettering mankind, fixing a common problem, or pioneering a new technology (a *real* new technology, not just a new shitty website or NoSQL garbage tool nobody wants). Medical device startups are really cool. Startups that develop technology for the 3rd world are cool. I'm still waiting on somebody to build a company that just services new companies, giving them turn-key solutions to build new networks and support them. I'll go work for them.
Saturday, October 8, 2011
note to self for change management system
if a hack like extra privs is applied to a system to allow a dev to fix some issue in production or something, there should be a system in place to automatically revoke the privs after a given time. or you specify a date/time range that the privs should be applied for, so you can say "during maintenance window sunday 5am-9am developer Steve gets weblogic sudo access". as a matter of principle, all changes should be allowed to have date ranges applied to control when the changes happen. if the date/time has a start but no end, assume the end time is indefinite. if the start time and end time are the same, the change is only applied once.
all account access should have defined end dates (for example, contractor steve has a 6 month contract, so all his access should have an expiry time set). BEFORE access is revoked, email alerts will be generated at 2 weeks, 1 week, 3 days and 1 day before expiry, to alert somebody before his access goes out the window. most configuration should not have expiry times, because it's assumed that if it's in config management it's meant to be there indefinitely, but for quick hacks we know we don't want around long we can set expiry times and get alerts before they expire.
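the data model for this is dead simple. a rough sketch (all names hypothetical, and a real system would persist this and track which alerts it already sent):

```python
#!/usr/bin/env python
# rough sketch of expiring changes/access grants with pre-expiry alerts.
# all names are hypothetical; a real system would persist these records and
# remember which alerts were already sent.
from datetime import datetime, timedelta

ALERT_OFFSETS = [timedelta(weeks=2), timedelta(weeks=1),
                 timedelta(days=3), timedelta(days=1)]

class Change(object):
    def __init__(self, description, start, end=None):
        self.description = description
        self.start = start
        self.end = end          # None = indefinite; end == start = apply once

    def active(self, now=None):
        now = now or datetime.now()
        return now >= self.start and (self.end is None or now <= self.end)

    def due_alerts(self, now=None):
        # which "expires soon" thresholds have been crossed at this moment
        now = now or datetime.now()
        if self.end is None:
            return []
        return [off for off in ALERT_OFFSETS if self.end - off <= now < self.end]

# e.g. contractor steve's weblogic sudo, expiring with his 6 month contract
steve = Change("weblogic sudo for steve",
               start=datetime(2011, 10, 9, 5, 0),
               end=datetime(2012, 4, 9))
print(steve.active(), steve.due_alerts())
```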
Monday, September 26, 2011
dumb network policies and systems practices
"jump server". just the term itself conjures up an image of "getting around" security or the network. it's a HACK. unless there's a big problem with your network, you should be able to allow access directly to the server you need to get to. connecting to one host just to connect to another host is retarded.
the only thing that is a potential benefit is that you're essentially forcing any network communication through one protocol (which can subsequently be circumvented on the jump server, depending) and (again, depending on the jump server) authenticating twice.
the bad things? it's incredibly, incredibly slow to transfer files. functionality with different protocols becomes broken. and you're circumventing the firewalls and network security. once you tunnel to the jump box it becomes much more difficult to determine who is connecting to where (after the jump box). and attacks on the internal network get much more interesting, not to mention if you escalate privs on the jump box you can piggyback any connection any other user is making from the jump box. not to mention you're forcing a new layer of complication onto your users so doing their job becomes more of a hassle - which almost by definition inspires people to break good convention for the sake of convenience. not to mention it's a waste of resources.
systems guys, don't jerk your users around. if there's a way you can get something done quicker, do it. for example: resizing logical volumes in a VM guest.
if your user wants 10GB more added to their work partition, get a procedure in place so you can do it live. rebooting the server should not be necessary for most admin tasks on a unix host.
don't believe me? read this blog post explaining how to extend an LVM volume while the box is still up. hey, now i don't have to wait a day to keep doing my work on a server which was allocated way fewer resources than it should have had!
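the whole live-grow procedure is two commands; wrapped up it's roughly this (the volume path and size are examples, and it assumes ext3/ext4 on LVM):

```python
#!/usr/bin/env python
# sketch: grow a logical volume and its ext3/ext4 filesystem online, no reboot.
# the volume path and size are examples; run as root on the VM guest.
import subprocess

LV = "/dev/vg00/work"   # hypothetical logical volume
GROW_BY = "+10G"

# 1. extend the logical volume
subprocess.check_call(["lvextend", "-L", GROW_BY, LV])
# 2. grow the filesystem to fill it, while it stays mounted
subprocess.check_call(["resize2fs", LV])
print("done - df -h should show the extra space immediately")
```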
Monday, September 19, 2011
secure kickstarting of new linux servers
PXE is not secure. Not only does it rely on broadcast requests for a PXE server, it uses UDP and TFTP to serve files, thus removing any remaining security features. It also can't be used on WANs and typically requires admin infrastructure and VLANs set up wherever the boxes will be installed, so lots of admin overhead is required.
To get around these problems and provide a secure mechanism for remote install you should use a pre-built linux image on CD-ROM or floppy disk. Most servers today come with one or the other, and both provide enough space to include a kernel and tiny compressed initrd with barebones networking tools.
The kernel should obviously be the newest vanilla kernel possible, patched to include any relevant hardware support. The more vanilla the better as you can quickly pick up the newest released kernel and build it without needing to modify vendor-specific patches.
The initrd should probably be based on an LZMA-compressed mini filesystem or cpio archive. Usually something custom like squashfs works the best. You'll need to build busybox and the dropbear ssh client in a uClibc buildroot environment to make it as small as possible. Bundle an ssh key for the admin server along with DNS and IP information for the server (in case DNS resolution fails, it can try the last known good IP address(es)). It should try indefinitely to get an IP address, and once it gets one it should make an SSH connection. Once the SSH connection is established it should download install scripts and execute them. All downloads happen through the secure SSH tunnel.
The initrd should also include proxytunnel and potentially openvpn or another UDP-based SSL tunnel so it can fall back to trying to connect through HTTP or udp port 53. You may need cntlm to work with NTLM proxies as proxytunnel's support does not seem to work for all versions of NTLM. Also, rsync should probably be included, or downloaded immediately once a network connection is established. Big apps can always be downloaded to a tmpfs partition later once the SSH connection is established. The initrd should also use a bootloader that can pass custom arguments to the initrd at boot time, for example to specify proxy or IP settings. It should probably support Web Proxy Autodiscovery Protocol for corporate environments.
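The fallback order the initrd walks through boils down to "try transports until one sticks". Here it is sketched in Python for clarity; on the real initrd it would be a tiny busybox shell script, and all hostnames, ports and key paths below are made up:

```python
#!/usr/bin/env python
# illustration of the initrd's transport fallback order. on the real initrd
# this would be a busybox shell script; hostnames, ports and key paths are made up.
import subprocess
import time

ADMIN = "kickstart.example.com"          # hypothetical kickstart server
BASE = ["ssh", "-i", "/etc/boot_key"]
DEST = "install@" + ADMIN

TRANSPORTS = [
    # 1. plain ssh on port 22
    BASE + [DEST, "true"],
    # 2. ssh tunneled through the site's HTTP proxy via proxytunnel
    BASE + ["-o", "ProxyCommand=proxytunnel -p proxy.example.com:3128 -d %s:22" % ADMIN,
            DEST, "true"],
    # 3. last resort: bring up openvpn over udp/53, then ssh across the tunnel
    ["sh", "-c", "openvpn --daemon --config /etc/boot.ovpn && sleep 5 && "
                 "ssh -i /etc/boot_key install@10.8.0.1 true"],
]

def find_transport():
    for cmd in TRANSPORTS:
        if subprocess.call(cmd) == 0:
            return cmd
    return None

# keep retrying forever, as described above; once a transport works, the
# install scripts get pulled over that same ssh connection and executed.
while find_transport() is None:
    time.sleep(10)
```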
If you have room, you may also want to bundle grub with the initrd. This can help you recover a system if it fails and it will allow you to install grub over whatever's currently on the hard drive. A good grub configuration to install would include options to boot from hard drive, CDROM or floppy, so as long as you have remote console you can reboot the box and select the install image from grub at boot time without needing to change BIOS settings.
Finally, each client image should be modified before burning/imaging to have custom ssh keys. You want each boot image to be able to be revoked from the admin server's login list, in case a boot image/disk is compromised or stolen. Granted, you're not giving this thing any more rights than access to your kickstart file tree, and that shouldn't have anything super confidential on it anyway. To customize the ssh keys per image you can have a script which generates new images, creates keys for each one and renames the image file to something specific to the machine. Match up the specific piece of hardware with this unique image file in your network's inventory database.
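The per-image key script really is just a loop around ssh-keygen. Something like this (paths and naming are whatever fits your inventory; actually injecting the key into the image's initrd is left out):

```python
#!/usr/bin/env python
# sketch: stamp out one boot image per machine, each with its own ssh keypair,
# so a lost or stolen disk can be revoked individually. paths/names are examples.
import os
import shutil
import subprocess

BASE_IMAGE = "boot-base.iso"       # the generic image built once
OUT_DIR = "images"

def build_image_for(hostname):
    if not os.path.isdir(OUT_DIR):
        os.makedirs(OUT_DIR)
    key = os.path.join(OUT_DIR, hostname + "_boot_key")
    # fresh passphrase-less keypair for this one image (it boots unattended)
    subprocess.check_call(["ssh-keygen", "-t", "rsa", "-N", "", "-f", key])
    image = os.path.join(OUT_DIR, "boot-%s.iso" % hostname)
    shutil.copy(BASE_IMAGE, image)
    # left out: injecting the private key into this image's initrd, and adding
    # the public key (tagged with the hostname) to the kickstart server's
    # authorized_keys so it can be revoked later.
    return image, key + ".pub"

if __name__ == "__main__":
    for host in ["web01", "db01"]:     # would come from the inventory database
        print(build_image_for(host))
```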
The kickstart server should have IP addresses dedicated only for the kickstart clients. An HTTP reverse proxy should be installed for ports 80 and 443 so that the client can use proxytunnel to connect to SSH through HTTP proxies. Optionally an openvpn daemon should be enabled on port 53 in case a client's firewall has an open outbound port 53. Each kickstart server's SSH daemon (listening only on the kickstart IPs) should use the same host keys so they can be copied to the initrd and you won't have to worry about the server host keys changing per box, messing up the initial connection from the clients. You could manage all the kickstart host keys independently but why complicate matters further?
In the end what you have is a client install CD or floppy which can boot up on a network, connect securely to a remote server, configure itself and follow setup instructions given by the remote server. It can even deliver detailed information about the host on boot-up so it can then download instructions specific to the machine type. When the machine boots up for the first time, select the CDROM or floppy and run it; as it boots it can install grub on the hard drive so you never even have to change the BIOS boot order. I recommend having the initrd grub config default to boot from hard disk; you can always manually select the remote install function and it's safer to default to booting from hard disk if the CD or floppy will stay in the machine.
Yes, this is a lot more maintenance than a simple PXE server. This is not intended for use in all environments. But if you need a truly secure, remote-accessible machine kickstarting solution, this one will do the job across all kinds of network types.
P.S. You can swap ssh out for a minimal HTTPS client plus a copy of the server's certificate on the initrd, so you don't have to rely on CAs. I personally don't trust 3rd party certificate authorities (as more and more evidence shows that states can snoop on SSL traffic without problems).
Tuesday, August 23, 2011
A tip for handling long downtime
So you push out a piece of code and it eats your live database. The site is broken. You need to take it down to repair the database. So you're going to keep your site down for how long? 30 minutes? 6 hours?
If you're trying to "fix" a database and you're keeping your site down until it's done, stand up a read-only copy of an old database snapshot plus the site code. Put up a banner on all pages saying the site is under emergency maintenance and that parts of the site are temporarily disabled.
This way your users get to continue using at least the read-only parts of the site and not all of your traffic goes out the window. Keep this in mind when developing the site too; not being able to update a hit counter in the database for a specific page should be a soft error, for example.
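Concretely, "soft error" just means the write path degrades instead of blowing up the page. Something along these lines (db.execute(), db.query() and the logger stand in for whatever your stack actually provides):

```python
# sketch of the "soft error" idea: when the database is read-only or down,
# non-essential writes are dropped instead of breaking the page.
# db.execute()/db.query() are stand-ins for whatever your stack provides.
import logging

log = logging.getLogger("site")

def bump_hit_counter(db, page_id):
    try:
        db.execute("UPDATE hit_counter SET hits = hits + 1 WHERE page_id = %s",
                   (page_id,))
    except Exception as exc:            # read-only replica, db down, etc.
        # soft error: log it and move on, the page still renders
        log.warning("hit counter update skipped for %s: %s", page_id, exc)

def render_page(db, page_id):
    bump_hit_counter(db, page_id)       # must never take the page down
    return db.query("SELECT body FROM pages WHERE page_id = %s", (page_id,))
```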
If you don't have a place to host this temporary database + site code, think about having such a place. Secondary/failover hosts would work at a time like this, or maybe your single host(s) need more capacity.
Wednesday, July 27, 2011
the internet is a collective waste of potential
What really makes me sick about the industry I work in (IT) is how a great majority of the really smart, creative people in it are working on the biggest wastes of time, money and energy on the planet.
Right now, somewhere in the Bay Area, someone is building a tool. In that tool is invested hundreds of thousands, possibly millions, of dollars of resources in human beings' time and the purchase of things to support them. Energy is being expended and people are spending their lives working on this tool. People spent years going to school to amass the knowledge to perform the tasks necessary to complete this tool.
That tool will be used to put funny phrases under pictures of cats on the internet.
Meanwhile, somewhere in sub-Saharan Africa, up to 11 million people may starve to death because they don't have food. When food prices soar (for example, when the USA meddles in food prices to achieve lower cost at the gas pump) and rivers are dry from drought, it hits the poorest the hardest. People's lives are lost as a result.
I'm not someone who fights for causes. I'm as hypocritical, cynical and lazy as most [American] people out there. But I get sick at the thought of the sheer staggering size of waste that is the internet and the big business taking advantage of it. There are untold fortunes of wealth being used to build digital empires that collectively do nothing to help anyone or anything. Sure, Facebook creates this big website and eventually people can use it to create an event to rally protesters to a cause. But this was an unintended side-effect, and the end result from such "socializing" is (my guess anyway) ineffective. And before you claim that Facebook is the reason a government is overthrown somewhere in the Middle East, please think long and hard about that. Revolution is the domain of people wanting to change something and deciding en masse to put their lives on the line for personal and political freedom. Facebook is the equivalent of a text-based telephone. Do you really think revolution couldn't have happened without a telephone?
My disgust at the waste of potential comes from my experiences in the Open Source community. I noticed how I could spend all my free time working on some cool new toy, only for there to be no real purpose to it. It would languish and if I finished it, nobody would really use it. I noticed how other people tended to spend their time on projects which were fun but produced nothing of value. So I stopped working on things I didn't need. Now I look around and all I can see is wasted effort.
Hackerspaces are one huge example of a waste of resources. Here you have a collective of very smart, motivated inventors who come together - to do what? Create 'makerbots'? Send balloons into space? Build arcade cabinets? WHAT'S THE POINT? You take that same group of people together and ask them to solve something truly difficult - like ways to keep people from dying from starvation in Somalia - and you'd have a real, tangible, valuable product.
Most of the people I know who work to change the world do so in person. I think that's partially because there's more of an immediate gratification and it doesn't take much to fly to Africa and get your hands dirty. But longer-term projects to increase the sustainability of a community are valuable too. You don't have to make huge changes in your life to spend your time working on something of value. All you have to do is change your focus. Do the same job, but pick which employer and project it is based on what kind of value it can produce.
Doing this for your own gratification is a selfish and unseemly objective, to me. If you just want to make yourself feel better you can volunteer at a local homeless shelter. This isn't intended to be a decision based on morals or for some goal to fix the way things are. The goal, to me, is to take the time you spend in life doing "work" and turn it into an investment in the future of the lives of living beings. Because you can spend your time doing nothing - really, it's not hard to do absolutely nothing - or you can spend it doing something which has a positive benefit outside of yourself or the company you work for.
I mean, it's a logical choice... help only yourself, or help yourself and others at the same time. In our society we do for others all the time because doing good things is cyclical. We can eat because we pay people to create and bring us food instead of stealing it (just ask warlords; it's not a sustainable business model). We don't get murdered because we don't murder people. And we hold doors for people so they too will hold a door for us. In this way, creating something of value which provides for other people will improve society - and on a bigger scale, the world. If you figure out a way to keep people from going hungry, we don't need to spend billions on foreign aid, which makes our economy stronger. It's a simplistic but effective idea.
The next time you're considering job offers or personal projects to pick up, ask what the end result of the work is. If one answer is "put funny phrases on pictures of cats", and the other is "helping people", consider the second one. It may benefit you more in the end.
Tuesday, July 12, 2011
Social Media Hoedown
Honestly. The fact that people haven't quite grasped that social media is all about fads is a little scary to me. You don't really have to deliver a 'product' as anything other than a slick UI that allows people to play with each other. It's communication for entertainment's sake.
There is no point to using social media other than event invites, relationship status and pictures. Those are the only useful features. Well, and contact information, but you've already got their contact info if they're really your friend.
I know, I know. You're going to defend your meek social interaction through comments and statuses and links and videos and all kinds of other nonsense. We've had forums for years. Some people make friends on forums but they're going to stick to the forums, not their facebook.
Google+ is just the latest reincarnation of the social media supersite. After them will be another. It doesn't matter to anyone what site they use as long as it's new and it's slicker. Why do they not care? Because there's no value in it besides the 3 things I mentioned above. As long as everyone they know is on the site, they'll use it.
So what's the killer product nobody's made yet? Quite simply it's a service that integrates every communication medium that people use. If (for example) you had a deal with every major wireless carrier to carry your apps and optimized communication through each varying protocol (SMS, SMTP, voice, HTTP, etc) to allow seamless and instant communication, there'd be no reason not to use it. If nobody ever had to sign up to a service because they were instantly and intrinsically enjoined with it there'd be nothing much else to sway one's opinion (besides Farmville). Nor would you have a choice, really.
And that's not to put down Farmville: Mindless games and apps have huge value for their market, but you don't need a social media network for that. Moreover, this seamless communication medium would allow you a framework to build apps which could reach anyone anywhere. Combine the dedicated carrier apps with a means to ship targeted "value-adding" applications and you've got one powerful, flexible social engine.
The way I see it is, all of these "sites" are based on some archaic notion that people should be using "the web" to get what it is they want. I disagree. I see every device with a network stack as simply a means to an end. The ends are basic: communication, information/entertainment and acquiring of goods/services. You can do all of those things with SMTP and POP3 if the sent and received messages are tailored for the application.
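As a hedged sketch of what that looks like in practice (everything here is made up for illustration - only the SMTP leg uses a real protocol library, and it assumes a mail relay listening on localhost), the "integration" is really just a dispatcher that picks whichever pipe actually reaches the person:

    # toy message dispatcher: the app says "reach this person", the layer picks a transport
    import smtplib
    from email.message import EmailMessage

    def send_via_smtp(addr, body):
        msg = EmailMessage()
        msg["From"] = "me@example.org"          # hypothetical addresses
        msg["To"] = addr
        msg["Subject"] = "app message"
        msg.set_content(body)
        with smtplib.SMTP("localhost") as s:    # assumes a local relay is listening
            s.send_message(msg)

    def send_via_sms(number, body):
        # stub: a real version would hand off to a carrier SMS gateway
        raise NotImplementedError

    def send(contact, body):
        # the application never cares which protocol carried the message
        if contact.get("email"):
            send_via_smtp(contact["email"], body)
        elif contact.get("phone"):
            send_via_sms(contact["phone"], body)
        else:
            raise ValueError("no way to reach this person")

    send({"email": "friend@example.org"}, "party at 8, bring the projector")

The point isn't the code; it's that once the transport is hidden behind one call, "the web" is just one more pipe.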
So let's unburden ourselves from the chains of some complex and limiting set of protocols and scripting languages. Nobody *needs* an app or a site. What we need are practical multifaceted interfaces to basic human interaction.
Google+ isn't going to give us that. The next site that replaces Google+ when the hoedown continues and the winds change, also won't give us that. But maybe once we've wasted enough time playing with our toys we'll finally get tired enough to just make tools that give us what we need and not always what we think we want.
Monday, June 20, 2011
An Exercise In Fear: Why We Care About A Bunch Of 15 Year Old Retards
If you know anything about LulzSec it's that its members are (or were until recently) 4chan users, probably of the /b/ variety. Everything from their namesake to their cause to their speech and online habits pretty much says /b/tard.
I know /b/tards. Some of them are nice people (though most are dead inside). I basically get why they are on there, looking at posts of dead people and stupid unfunny cartoons and fag jokes and the inevitable hentai bestiality incest child rape porn. It's because they're bored. They're bored and so they go on the internet to find something to entertain them. And they find lots of other really bored people who like to look at shocking things and basically be idiots. That's the whole reason for 4chan. People are just horrible, and that's why that's there.
Not that i'm complaining. I grew up on the internet. I've looked at and read every horrible despicable thing the human imagination can think up. So i'm not harboring any grudge or ill will against these people. But I think i've gotten to the point where i'm sick of looking at boring, mindless, stupid shit. Unfortunately I can't completely ignore it because of LulzSec and Anonymous.
Why is the media giving so much attention to whatever crap LulzSec decides to announce? Today on Google News one of the top stories was the same story I had read on Hacker News: LulzSec decides to go on some "new mission" wherein they will attempt to deface government websites. Do you realize how completely boring that is? Do you know how much of a fucking loser you have to be to dedicate your valuable time to erasing a web page? The fact that just this announcement was newsworthy makes one thing clear: people are fascinated by and afraid of LulzSec.
The attacks carried out in their name have been many and they have infiltrated some very large and incredibly, stupidly insecure sites. The subsequent release of information from these sites has been absurdly large. On top of that, they command a sizeable botnet with which they DDoS whoever the fuck they feel like at the moment.
Are these attacks 'sophisticated'? No. There are many freely available tools which can be used to automate looking for and exploiting holes in public web applications and network services. Botnets are also not very hard to 'get'; most botnet owners don't properly secure their botnets and many can simply be social engineered to hand over control of the botnet. Most security researchers i've talked to don't find much difficulty in acquiring tens of thousands of nodes.
However, these tools are effective. Clearly there are many large sites with old holes waiting to be taken advantage of, and a DDoS is a very effective means of taking a host offline if you don't have the skill to penetrate it. Thus they can and do cause quite a bit of mischief. But why are we getting a news bulletin every time they do some damage?
Ultimately we are playing into their media-whoring hands. A couple of kids who are really bored are finding lots of attention (both positive and negative) in creating havoc on the internet. With each site taken down and subsequent press release they get more infamous and thus the next attack or announcement gets even more press. Online businesses cower in fear waiting for the next attack, and when it affects users directly (like the many gamers affected by their DDoSing) they are sucked into a whirlpool of hate directed at LulzSec - who, being 4chan trolls, revel in the fact that they could make such a large user base 'mad'.
Where do we go from here? Do we attempt to ignore the internet bullies in the hopes that they'll go away? Do we attack back and start a ridiculous arms race of morons flinging poo at each other? Should the media stop giving them a loudspeaker, or should it try to exercise some investigative journalism instead of parroting their exploits?
The truth is that people are simple. LulzSec will keep this up for a little longer, looking for big targets to attack to remain media darlings. We'll keep eating it up because people like celebrity gossip. But for the most part, everything will be the same as it always has been.
The difference is that now there's an 800lb gorilla in the room exposing the horribly lax security practices some of us know to be standard fare in the corporate IT world. Perhaps we'll get some tough new laws and a prison sentence to try to discourage this type of behavior in the future. If there's a positive effect of this whole episode it's that we can use LulzSec as bogeymen to scare developers and sysadmins into doing their due diligence to keep their systems secure.
But then, when the lights go down and the circus is over, everything will go back to the way it was, and we'll sleep soundly until another bunch of bored teens decide to DDoS or exploit another service. Hopefully we can prevent this kind of thing from happening again by just not playing into the trolls' hands.
Wednesday, June 1, 2011
devops/deveng is still a bad idea
I'm sure in 10 years people will eventually get just why it's a crappy idea to reinvent the wheel in Operations departments. Maybe somebody will finally standardize on a set of software configurations to deploy to manage an enterprise network. Could just be wishful thinking, but you never know.
Right now I work at a large non-profit as a small cog that's part of some big wheels. There are lots of people on my team who each basically do one or two specific jobs. In terms of how we go about accomplishing tasks it's incredibly inefficient. But that's the nature of big non-profits, I assume.
One example of how this "devops" idea fails came up recently. Somebody wanted a tool that rotates logs passed via standard-in. This is an old problem: you've got to keep processing logs but you can't afford the downtime of moving the log and restarting the service. So you open a pipe to a program which handles the juggling of logs for you. The easy choice is cronolog, a very old, stable open source tool.
When I asked if we could install it on all our RHEL boxes, I was told there was too much red tape involved with just getting a 'yum install' done, so I should just compile and install the software locally. (Yeah, i'm serious.) So after doing this I modify the script we need to use cronolog, test it, and it works great as expected.
Once i'm ready to push this out everywhere i'm told to hold off, that we need a better solution. Better than 'it's done and working'? It turns out, if you use 3rd party software like this in our environment, three things happen:
1. Since we don't pay for the software there's no support contract. If there's no support, apparently it's too much trouble to find somebody who knows C in a building full of developers to support the software.
2. As a side-effect of #1, the security team can't get updates from a vendor and thus might not allow us to use the software as it might have unknown security holes.
3. We have to change our build standard to include the new software.
Now keep in mind, cronolog is shipped with RHEL. It just isn't installed on the machines. So getting it installed brings all sorts of red tape questions. What's their solution? Write something from scratch in-house. This of course is a lovely paradox, because:
1. We have to support the in-house solution now.
2. Nobody is going to audit the in-house solution for security holes.
Of course we don't have to change our build standard and it isn't a security conflict for one simple reason: Everyone ignores in-house software. That's right: the loophole to the regulation is to simply completely ignore auditing for internal tools. And there's an interesting point about devops/engops.
If you had actually paid for a product instead of developing it in-house or getting an open source solution, you would have some assurance that security holes get attention and patches, and you would just use it - there's no custom scripting or in-house wrapping required. Not only does paying for it speed up your work, it makes it more secure (in a fashion) and keeps you from burning development cycles.
The next time you decide to hire a 'devop' or 'deveng', consider how much money and time you'd save if you just spent a couple hundred bucks (or a couple grand) on completed, supported tools. (And as an aside, try not to allow gaping loopholes in logic like the ones pointed out above.)
Edit: my mistake, cronolog is not shipped with RHEL (why I do not know). I ended up writing my own as it's much simpler than I originally thought. However, getting them to support 'logrotate' on solaris is looking like a bigger challenge...
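For the curious, a stdin log rotator really is a tiny program. Here's a rough sketch of the idea, not the script I actually deployed (the path and naming scheme are made up):

    # read lines from stdin forever, write to a file named for the current hour,
    # and switch files when the hour rolls over
    import sys, time

    def logfile_name(prefix="/var/log/app/access"):     # hypothetical path
        return "%s.%s.log" % (prefix, time.strftime("%Y%m%d%H"))

    current_name, out = None, None
    for line in sys.stdin:
        name = logfile_name()
        if name != current_name:                 # first line, or the hour changed
            if out:
                out.close()
            out = open(name, "a", buffering=1)   # line-buffered so tail -f works
            current_name = name
        out.write(line)

You'd use it the same way you'd use cronolog: pipe the service's output into it.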
Monday, March 28, 2011
the real problem and real solution for https
(for background see this hackerne.ws post)
Really, the problem here is nobody trusts the CAs. (it is kind of difficult to just assume 650 different CAs will all maintain 100% security over their cert-generation procedures)
What people seem to be looking for here is an early warning system for possible mitm *after* an initial "trusted" connection (which you can never tell for sure because even if you're browsing from a secure LAN the destination site could have been compromised, or the nameserver).
The best solution to this problem is one that will fix the problem of a trusted initial connection without relying on CAs. Of course this is difficult. But let's simplify it a bit first.
Assume for a second the internet is only two nodes: me at a desk in one room and an ethernet cable that goes into an adjoining room. I don't know who's in that other room - but I want to connect to the other end and do my banking. So how do I know if it's my bank in there or a stranger?
What do I have that I can trust right now? I have my computer, which contains (among other things) an operating system and a web browser. We won't discuss how it got there, because to discuss the origin of things ad infinitum will leave us with religion and that never solves anything.
My browser already ships with trusted information: the certificates of trusted authorities. But it's trying too hard to make everybody happy, sucking up as many different sources of trusted information as possible. In ten years we might have 6,500 trusted CAs. This won't end well.
I just want to tell if the person in the room is really my bank. What can I do to be sure?
I can ask the person in the room a secret only my bank knows, for one thing. I could have told my bank in person or over the phone, via a letter, or probably whenever I joined my bank for the first time. This would be something only I would know, not something that came with my computer.
Another way I can be sure is if I verify all the steps in the process to connect to my bank. If someone was trying to fake being the bank, they would probably have a different IP address and a different certificate. If I had a print-out with a hash of those and punched it into my browser before I made the connection, my browser would know for sure this is the bank's information and reject the connection if anything was different.
What we have here essentially is static configuration and CHAP. Combine this with PKI and you have three separate pieces of information which must be spoofed in order to successfully compromise the connection. If the person in the other room knows all of this, the bank is surely compromised anyway.
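A minimal sketch of those three pieces working together, assuming the IP, certificate fingerprint and shared secret were all exchanged out of band (every value below is a placeholder, not a real bank's):

    import hashlib, hmac, os, socket, ssl

    PINNED_IP = "203.0.113.10"                   # from my print-out
    PINNED_CERT_SHA256 = "d4c2placeholder"       # cert fingerprint from my print-out
    SHARED_SECRET = b"something only my bank and I know"

    ctx = ssl.create_default_context()           # normal PKI checks still apply
    with socket.create_connection((PINNED_IP, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="bank.example") as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != PINNED_CERT_SHA256:
                raise SystemExit("cert doesn't match the print-out - walk away")
            # CHAP-style step: only someone holding the shared secret can answer
            nonce = os.urandom(16)
            tls.sendall(nonce)
            expected = hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()
            if not hmac.compare_digest(tls.recv(32), expected):
                raise SystemExit("wrong answer to the challenge - not my bank")

Spoofing that connection now means hijacking the IP path, presenting the exact cert and knowing the secret - not just minting a cert at one of 650 CAs.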
Note that this is merely a way to establish a "secure first connection". You will still need specific information about the site you're connecting to (IP and cert hash), and to be more sure of the connection you'll need to have already registered a secret only you and the other party know.
There are technical limitations with some of this design, but if these can be worked around you should have a fairly secure connection. If you want to sleep better about PKI there should be fewer CAs (more like 10 instead of 650).
Thursday, March 3, 2011
universal webapp architecture
I think it's funny when people have to redesign their architecture. It's like, what, you couldn't scale? Just throw some hardware at it.
Your framework had limits? Why'd you reuse it in a way that would eventually run into bottlenecks? Didn't you learn how the whole thing worked before deciding to implement it?
Your code is slowing down and bloating up, so you figure a redesign is easier than optimization? Congratulations, you've fallen victim to the worst thing you can do when faced with performance problems: throwing the baby out with the bath water.
Just optimize your current crappy system, add layers to buy cheap performance, scale horizontally and get on with business. Redesign is usually a waste of business resources.
Monday, February 28, 2011
holy ipv6, batman
i just realized my server is passing ipv6 traffic through ssh for my clients. i enabled ipv6 on a windows laptop (netsh interface ipv6 install, not ipv6 install) and told putty to connect to an ipv4 address, tunneling a dynamic socks proxy on both ipv4 and ipv6 to my remote server (which has an average ipv4 and ipv6 network with 6to4 set up for whatever my isp's 6to4 gateway is). then set up my browser to use the dynamic forward port as its proxy, and hit 'ipv6.google.com'.
BOOM. page comes up. ipv6.google.com<->my6to4box<->putty<->windows<->browser. it just friggin works! http://test-ipv6.com/ says it is indeed the ipv6 addy of my6to4box that it's seeing, so ipv4 and ipv6 really are being tunneled automatically. this is pretty cool.
NOW WHY ISN'T EVERYONE DOING THIS YET?!
(what's funny is i only tested this today because xkcd kindly informed its readership that they finally fucking set up an AAAA record which i'd been complaining about for over a year)
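(for the non-putty crowd: the openssh equivalent of that tunnel is roughly 'ssh -D 1080 user@my6to4box', then point the browser's socks proxy at localhost:1080 - same dual-stack result, assuming your server is set up like mine.)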
it had to happen sometime
i predicted this years ago when my last company first thought of migrating to a 3rd party to host their mail cheaper. i don't remember if they ever implemented a strategy to back up the mail remotely, though i do remember for a while the "beta testers" had their mail sent both to exchange and gmail.
point is: don't tell me just because a company is large or reputable that the basic procedures of any IT department should be ignored. if you have data and it's important you need to keep a backup, and you need to be able to verify the backup. if you can't put your hands to a redundant offsite copy of your data it's going to vanish eventually.
to all the people that lost their mail: i feel for you. i've lost data before too because i didn't back it up. however, we do learn that most of our correspondence's history is unnecessary. nice to have "in case of emergencies", but unnecessary. do i really need those mailing list threads from 3 years ago? will that website confirmation really be necessary down the road? nah. the personal messages passed between family and friends may be missed, but i've never really "gone down memory lane" before and doubt i would in the future.
this is also the risk you take when you rely on web-only email. luckily i believe gmail allows any user to make an offline copy of their mail, but some services like yahoo and hotmail do not (unless you pay). i'd like to see a push for competing providers to mirror other providers' datasets for redundancy but that might just make them less important in the end.
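as a rough sketch of what "make an offline copy" can look like (assuming your provider exposes pop3 and you've enabled it; the server name and credentials below are placeholders):

    # dump every message in the mailbox to local .eml files
    import poplib

    conn = poplib.POP3_SSL("pop.example.com")    # your provider's pop server
    conn.user("me@example.com")
    conn.pass_("app-password-here")
    count, _ = conn.stat()
    for i in range(1, count + 1):
        _, lines, _ = conn.retr(i)
        with open("backup-%05d.eml" % i, "wb") as f:
            f.write(b"\r\n".join(lines) + b"\r\n")
    conn.quit()

stick that in cron, verify the files actually open, and you're most of the way to not being one of these stories.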
Thursday, February 17, 2011
why your company needs to use 2-factor security now
so this story points out one of the pink elephants in corporate security: accounts are often left open after employees leave the company. the other pink elephant they won't talk about is shared accounts.
i can't tell you how many people's passwords have been told to me by users while i was an admin. if i wasn't creating them an account i'd just be troubleshooting something and they'd just give me their password, like it was a free coupon. aside from this there's co-workers who often share passwords to get access to files locked away by strong permissions or to work on the same project for brief periods, or just for the hell of it. they don't really care about security and they don't think anyone's going to abuse the trust. but there's very little trust in real security.
so here you have ex-employees who potentially know several other employees' passwords. if all you use is a password for, say, VPN and e-mail, your company has been owned. there are case studies in how you will get hacked just by pilfering an e-mail account. so clearly, this shit needs to be locked down. you can't just rely on a password - you need another authentication factor.
don't want to pay for expensive RSA SecurID? that's fine; use VeriSign's free OpenID provider and a $5 hardware authenticator from PayPal (or a $30 version from VeriSign) and you have effective, open 2-factor authentication.
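for a sense of how little magic is in those tokens, here's the generic time-based one-time password scheme (RFC 6238) that most cheap authenticators implement - an illustration of the idea, not necessarily what verisign or paypal ship:

    # server-side check of a 6-digit time-based code; the secret was shared
    # with the token/app at enrollment
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, when=None, step=30, digits=6):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if when is None else when) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0f
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7fffffff
        return str(code % (10 ** digits)).zfill(digits)

    def verify(secret_b32, submitted, window=1):
        # allow a little clock drift on either side of "now"
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
                   for i in range(-window, window + 1))

the point is that the second factor lives in the user's pocket, so a shared or stolen password alone gets an attacker nowhere.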
is it possible to steal someone's authenticator and get away with a similar hack? of course. but it's a lot easier for someone just to log in using someone else's credentials and escalate to wherever they want to be.
Wednesday, February 16, 2011
most tech stays the same
Sometimes I get a little scared of the future. I'm not exactly a luddite but i'm pretty close to it considering i'm supposed to be some computer-whiz hacker guy. Most of my hardware is years old by the time I buy it and I keep it around until it falls apart. My software... well, i'm a Slackware user, let's leave it at that. I still don't use any programming languages other than Perl and C. And apparently I can still make a very good living like this.
That's the funny thing i'm realizing... While we always have to adapt to some newfangled apparatus, in general everything is the same. We're still using computers based on a friggin' 26-year-old processor. We're still using the operating systems designed for them. We're still programming in and using the products of languages just as old and older. While the fashion may change, at the end of the day we're still wearing pants, and still writing code that doesn't sanitize input.
Security isn't any better than it used to be. Firewalls are still relatively dumb beasts (do you know any large company that does layer 7 filtering that isn't just proxies?). Anti-virus software is about as accurate against modern obscure trojans as it used to be. It's possible that web application writers are even less intelligent than they used to be, seeing as their output is ripe fodder for a new generation of penetration testers. Hell, we're still using passwords for root accounts. (We still HAVE root accounts!?)
Probably the one thing that is quickly changing is the barrier to entry. It used to be you'd pay a hundred bucks or more for a bare-bones dedicated server. Now four dollars US will get you 15 gigs of space, a gig of ram and 200 gigs of bandwidth on a 100mbit shared pipe. PER MONTH! You spread that hundred bucks out and you've got an impressive server farm by 1999's standards. And computers in general keep getting cheaper, meaning more kids can get their hands on a netbook and start hacking away. Pretty soon you'll see a new start-up sector dedicated to youth and college kids, who join forces and collaborate - not to write free software like Linux, but to write free Android apps and run web development farms.
And still, the tech remains mostly the same. Web apps (we used to call them 'cgi scripts') and their backend counterparts interfacing with relational and non-relational databases (we used to call them 'BerkeleyDB') just become the modern fashion of development, with mobile platforms being the meatiest new market to squeeze some bucks out of. But all the old standards will still be there. Some guy will still be assembling a C library for some high-speed low-latency backend app to interface with his Clojure mobile app. The devs will write some Python or Perl script to get their app staged on their workstations and hand it off to the sysadmins to run in production (with minor edits, of course). Security goons will continue to scan their networks and sites for unexplored chasms of potential vulnerability.
We'll never really reach a utopia where modern technology becomes re-invented and everything is magically better. Everything pretty much stays the same.
Friday, February 11, 2011
encrypted message passing with plausible deniability
so, RedPhone is encrypted VoIP with an intermediary to pass the connection off. with this it's possible for a foreign power to force you to reveal the nature of the call. their other product, TextSecure, offers little in the way of "encrypted SMS" because they use OTR which is effectively pointless with a man in the middle. however, if you wanted to transmit a message with plausible deniability, you could do it like this.
create a store-and-forward service for anonymous message pushing and pulling. make all messages encrypted and of a fixed size - something decent enough for a small compressed media file. every time you connect you push an encrypted message of this size and you pull one of the same size. every single time. the interval between each successful communication should be something like every half hour or every hour.
the result should be that nobody can tell if you were actually sending or receiving anything because it always sends and receives something, all the time, regardless of whether you needed to do anything. you could also have it encrypt like a matroska file so you can encode multiple files, and possibly even an encryption package which only decrypts parts of the payload as determined by the encryption term used, so if you used one decryption term it decrypts an MP3 file, and another decryption term reveals secret documents. plausible deniability!
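a bare-bones sketch of the constant-rate part (sizes, interval and the cipher are all stand-ins; the point is the wire looks identical whether or not you had anything to say):

    # push and pull one fixed-size blob every interval, real message or not
    import os, queue, struct, time

    MSG_SIZE = 512 * 1024          # made-up fixed payload size
    INTERVAL = 30 * 60             # made-up interval: every half hour
    outbox = queue.Queue()

    def encrypt(blob):
        return blob                # stand-in for the real cipher

    def next_payload():
        try:
            real = outbox.get_nowait()
        except queue.Empty:
            real = b""             # nothing to say: pure padding goes out anyway
        # length-prefix the real bytes, pad the rest with random junk
        # (assumes a real message fits in one blob)
        body = struct.pack(">I", len(real)) + real
        return encrypt(body + os.urandom(MSG_SIZE - len(body)))

    def run(push, pull):
        # push/pull are whatever talks to the store-and-forward service
        while True:
            push(next_payload())
            pull(MSG_SIZE)         # always pull a blob of the same size too
            time.sleep(INTERVAL)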
a week with a robot
sometime last week i bought my first Android phone. i've never owned an iPhone so i can't compare it to that, but i have owned S60 and Windows Mobile phones, so we can start there. this is the LG Optimus V from Virgin Mobile.
pretty much every carrier except AT&T has a version of this phone now, and i personally think this one looks the nicest. you can probably get it for free on another carrier by getting a contract, but the phone only costs $150 without a contract, making it (afaik) the cheapest android phone on the market. with month to month plans with unlimited data and texting starting at $25, this is the cheapest smartphone and plan in the united states. but since the price is so low, there have been some problems.
the battery sucks. the damn thing could hardly stay on for 8 hours after the first charge. after killing the battery 3 times the battery slowly started to gain some extra life (after about 4 or 5 days). some forum browsing had me try a few tricks like turning on airplane mode or turning off data, and this has something to do with the "cell standby" battery-sucking thing in the phone's battery use screen. i haven't measured the battery life since the last charge, but if the phone is alive when i wake up in the morning it will have lasted just past 12 hours on standby. this is HORRIBLE standby battery life for any modern smartphone, but to be honest if i can just make it stay alive for 3/4 of a day i will live with the crappy battery life. (this is all with gps, bluetooth, wifi and google syncing turned off and brightness set to the lowest setting)
the keyboard (both the stock android keyboard and swype) locks up randomly in some apps like the browser. like 4 times in a day. i installed the "gingerbread" keyboard from android 2.3. it doesn't do swype (which is kind of annoying) but at least it isn't freezing up all day now. it seems like portrait typing is a lot more accurate than landscape which is kind of the opposite of how i thought typing accuracy would go.
google navigation is *amazing*. it's like i finally have a real car gps. even if you're not looking at it, you can listen to it and follow the directions just fine. kind of hard to hear it over music in the car but i'll figure out a way around that eventually.
the phone as a whole is very fast and i never see anything lag or skip really. considering this is a "slow" 600mhz processor i'm kind of impressed, and it's definitely worth the money speed-wise.
what the hell is with Android that you can't close apps? there's a way to "force stop" applications in android, but it's just dumb to me to clutter up your OS with applications you aren't using. some of the apps when you "background" them don't do anything, but some definitely do, robbing you of battery life and using data. just let me close the damn apps android. it makes me feel better.
the "market" feels just like s60 app downloading: a bunch of shoddy, not-quite-trustworthy developers making useless apps for free and requiring you to fork over access to practically the entire phone to do something like download news updates.
it's difficult to do something trivial like just set a black background or select an MP3 as a ringtone. they were nice to provide a tutorial for things like Swype but everything else is a clunky learning process. i think at least once there was some simple action i wanted to do but i had no idea how to do it in android and had to go digging around in google and knowledge bases to no avail.
the camera and video are pretty decent. there's no flash of course, but beggars can't be choosers with a cheap thing like this. it's nice to have 30fps video again.
it's also nice to have a standard 3.5" headphone jack and mini-usb connector for once. i've settled for proprietary connectors most of my life and now i can actually go get a giant microsd card and listen to music with a normal headset. of course there's no media buttons anywhere on the device, but there's a headset with media controls that i might be able to get for it, or make for it.
oh yeah, and i was right about touchscreens: they're shit for texting while driving. you have to look down all the time for what you're typing and correct it, unlike a real keypad. with swype i'd have a pretty good chance of just getting words out a few letters at a time, but since i have to use the gingerbread keyboard there's a high likelihood i'll screw up and it'll take a lot longer to finish just one word.
they tell you some crap in the reviews about "it doesn't have tethering or flash." you can use a browser that transcodes flash to html 5 (skyfire or something), and the tethering is just hidden; a free app download will "push the button" for you and enable it. you can also root the phone but so far i have no need to do anything that requires rooting.
in general i think the Android OS is immature and not as useful as something like s60 or sony ericsson. they're still behind the curve, trying to provide the same user experience that has sat there for years in other more "common" devices. hell, just the "main menu" is awful: 100 different apps crammed into one screen and if you want to separate them you have to do it yourself on one of the 4 or 5 panes of the main workspace screen.
i'm going to terminate my AT&T account and go with just this phone and virgin mobile. the price can't be beat. and now that i have an android phone, all of my texts and calls both come from and to my google voice number, so it appears to everyone that it's "my real number." no more of this 2-phone-numbers crap people had to figure out with me before. it's beautiful.
Wednesday, February 9, 2011
caching the internets
say you wanted a project to provide a small amount of internet bandwidth to a large number of users (say, an african town via a satellite link, or a few blocks in egypt with a t1 line). you'd need some serious caching, access control and traffic shaping to ensure it kept working.
first of all you have to determine capacity and limit use. you can't just allow a thousand retards to start torrenting every season of House from a fucking satlink. total outbound and inbound traffic must be regulated to allow a usable number of connections at a usable bandwidth (so for the sake of argument, slightly slower than a 56k modem). no more than $BANDWIDTH/$MODEM_SPEED streams at a time with a timeout (tcp keepalives disabled). big syn backlog buffer to wait for an available slot while trying to connect.
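a rough sketch of what the gateway might run, assuming linux with tc and iptables and the satlink hanging off eth0 (the interface, rates, counts and timeouts are all invented for illustration):

# cap everything headed out the satlink at whatever the link can really do
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 512kbit ceil 1mbit

# hard cap on total concurrent tcp streams (mask 0 = count every client together);
# drop the extra SYNs instead of rejecting so clients just keep retrying for a slot
iptables -A FORWARD -p tcp --syn -m connlimit --connlimit-above 20 --connlimit-mask 0 -j DROP

# on the proxy box itself, let a big pile of half-open connections wait their turn,
# and time out idle streams so slots actually free up
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600

tc only shapes what leaves an interface, so the inbound side needs the same kind of tree on the lan-facing interface (or an ifb device).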
also you need to shape a couple protocols for less latency. ssh gets higher priority, but only up to a certain amount of bandwidth... if an ssh session moves more than 5 megabytes of traffic, somebody is fucking scp'ing, so kill that connection (not that they can't get around that with an rsync loop). SIP and other latency-sensitive protocols get the same priority treatment.
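sticking with the made-up htb tree above, the priority class and the ssh byte-count kill could look something like this (class ids and numbers invented):

# a small high-priority class for interactive stuff, with a low ceiling so it can't hog the link
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 128kbit ceil 256kbit prio 0
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 22 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5060 0xffff flowid 1:10

# any "ssh" connection that's pushed more than 5MB is really an scp; cut it off
iptables -A FORWARD -p tcp --dport 22 -m connbytes --connbytes 5242880: \
  --connbytes-dir both --connbytes-mode bytes -j DROP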
squid or some other more efficient proxy with HUGE cache store at the uplink point. dns proxy as well. also if it's not too much trouble, a pop3/imap caching server, and definitely an (authenticated) smtp relay to pass messages when the link is available again. run an ad-blocking thing in the proxy to strip out all unnecessary garbage content which would just suck up bandwidth otherwise. if you want to get retarded, block all streaming content. if you want to get SUPER retarded, limit allowed content to only a few MIME types (text/html, image/jpeg, text/plain, etc). allow for whitelists of commonly-hit, cacheable content among the stuff that's blocked.
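a sketch of the uplink squid config (sizes, paths and acl names are made up, and the ad/whitelist files are hypothetical lists you'd maintain yourself):

cat >> /etc/squid/squid.conf <<'EOF'
# enormous on-disk cache, keep big objects around
cache_dir ufs /var/spool/squid 200000 64 256
cache_mem 512 MB
maximum_object_size 200 MB

# strip ads and other garbage outright
acl ads dstdomain "/etc/squid/ad-domains.txt"
http_access deny ads

# the SUPER retarded option: only a few mime types come back through,
# unless the site is on the whitelist of known-cacheable stuff
acl okmime rep_mime_type -i ^text/html ^text/plain ^image/jpeg ^image/gif ^image/png
acl whitelist dstdomain "/etc/squid/whitelist.txt"
http_reply_access allow whitelist
http_reply_access allow okmime
http_reply_access deny all
EOF

dnsmasq or something similar sits next to it for the dns caching; the mail relay is its own project.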
it could be that additional routes are added to this one tiny uplink as the network grows. add a new caching server with the same tweaks at each router so that cache is kept at each subnet and also the main uplink point. this helps reduce bandwidth used getting to the uplink point itself, allowing your intermediary routes to also be weak/small.
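in squid terms each subnet cache just chains to the one at the uplink point (hostname invented):

cat >> /etc/squid/squid.conf <<'EOF'
# this box caches locally but fetches misses through the uplink cache, never directly
cache_peer uplink-cache.example.lan parent 3128 3130 default
prefer_direct off
never_direct allow all
EOF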
what may also help is an additional proxy at the other side of the satlink which compresses content before being sent to the client pipe; kind of like Opera, this would (for example) compress images on a network which had fast internet access and then send across the slow satlink to the caching stuff, further decreasing delivery time and bandwidth.
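i haven't tried it, but one lazy way to get compression across the link itself is to run the cache-to-cache traffic through a compressed ssh tunnel (hostnames invented); a purpose-built recompressing proxy like ziproxy on the fast side would do far better on images:

# on the near-side cache box: compressed tunnel to the proxy on the fast side of the satlink
ssh -C -N -L 3129:localhost:3128 cache@fast-side-proxy.example.com
# then point cache_peer at localhost port 3129 instead of crossing the link uncompressed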
SSL obviously makes all this caching more hairy and the bandwidth demands more intense. perhaps shape down the connection speed of SSL connections, since we know they're going to suck more bandwidth, and reduce the total number of client connections allowed. or, if people would go for it, provide an encrypted VPN on the cache box that people connect to, and then do all their traffic in plain text behind it. another option: run an sslstrip-like app on any site that will work without ssl, and basically just circumvent security and tell the users not to expect privacy. more bandwidth or more security, you decide.
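shaping ssl down is at least easy, since it's all on 443 (again assuming the made-up htb tree above):

# https gets its own slow lane since the cache can't do anything with it
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 64kbit ceil 128kbit prio 7
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 443 0xffff flowid 1:30

# and fewer total ssl connections allowed network-wide
iptables -A FORWARD -p tcp --syn --dport 443 -m connlimit --connlimit-above 10 --connlimit-mask 0 -j DROP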
it should go without saying, but nazi-esque firewall policies get implemented at the borders. block everything unless it's explicitly requested and with a good reason. use layer 7 filtering wherever possible to make sure people are really using those ports for what they say they are. if it's for some common public service like AIM, only allow the servers that AIM actually uses.
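the border policy itself is the boring part; roughly this, assuming the out-of-tree l7-filter patch is installed (pattern names vary, and the proxy address is invented):

# default deny for anything crossing the border
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# web traffic only via the cache box
iptables -A FORWARD -p tcp -d 10.1.1.2 --dport 3128 -j ACCEPT

# explicitly requested services get a hole, checked at layer 7 where possible
iptables -A FORWARD -p tcp --dport 5190 -m layer7 --l7proto aim -j ACCEPT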
yet more shitty vendor software
Oracle's Client installer is shitty. I think we all know that. There's some arcane mystical process where you're supposed to record an interactive install's options, then repeat the install later as a "silent install" using the pre-recorded steps. That usually doesn't work, either because the documentation is ages old, or the syntax has changed, or it depends on some other shit they haven't told you about, and there's no debugging information to figure any of it out. So pretty much the same as doing a Solaris JumpStart.
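For reference, the record-and-replay dance is supposed to look roughly like this (paths made up, and the exact flags drift between installer versions, which is half the problem):

# Record an interactive install's answers into a response file
./runInstaller -record -destinationFile /tmp/client.rsp

# Replay it later with no GUI and no babysitting
./runInstaller -silent -responseFile /tmp/client.rsp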
RSA is the new vendor hell i'm involved in. These morons couldn't create a software installer to save their lives. They have this huge installer just for their Access Manager product which bundles all these dependencies your OS already has (because apparently it's impossible to just specify requirements for software anymore; they should really ship me a copy of Bash to run the install script with). You run the installer bash script, which usually requires root and has about a billion hard-coded paths in it, so even if you pick a new install path it's only going to work in the pre-recorded one. There's also a bunch more scripts that get executed which modify system-level root-owned files, and there's no way to get around the hard-coded paths unless you (1) edit the install script, (2) create a new RPM database in your home directory or modify your rpmmacros file, and (3) unpack the RPM the software comes in (which is for some reason packaged separately from the rest of the install software), modify all the scripts and paths, and re-pack it into an rpm. That's for Linux; on Solaris it's near god-damn impossible without root, or just lots and lots more editing and unpacking by hand.
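In case anyone else is stuck with this, the workaround boils down to something like the following sketch (paths are made up, and the last step only works if the package is relocatable at all):

# Private RPM database so you don't need root's
mkdir -p $HOME/rpmdb && rpm --initdb --dbpath $HOME/rpmdb
echo "%_dbpath $HOME/rpmdb" >> ~/.rpmmacros

# Unpack the vendor RPM, fix the hard-coded paths by hand, re-pack, then install to a prefix you own
rpm2cpio vendor-product.rpm | cpio -idmv
rpm -ivh --dbpath $HOME/rpmdb --prefix $HOME/opt/vendor vendor-product-fixed.rpm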
The end result is that you have to fucking re-engineer their whole installer just to get the god damn thing to install in more than one place and without root. What kind of morons are these people? Do they really expect someone with root credentials to sit there and babysit someone installing a shitty instance of their shitty product? I understand that creating user accounts just for the software is up to the root-owning sysadmins, but everything else probably needs to be done by somebody else, multiple times, in multiple paths. Having a "response file" to read and re-install with is nice, but only if you can fucking USE it for something, like actually installing to a non-standard path as a normal user.
Get with the fucking program, shitty 3rd party software vendors. Even Oracle has a "run-as-root.pl" file so the majority of the install can be performed by a not-root terminal monkey. And for fuck's sake, provide some simple documentation and explanation IN YOUR INSTALL FILES. I don't want to dig through 20 PDFs on your shitty knowledge base site or call tech support just to figure out how the hell to create this god damn response file to install your crap. I have better things to do than figure out your lame installer.
Friday, January 28, 2011
solaris admins are masochists
it's been a while since i had to do active development on solaris boxes. god they suck.
- tar doesn't natively handle compression, and it doesn't strip leading slashes like gnu tar does, so "tar -xvf foo.tar" on an archive with absolute paths extracts straight into /.
- the default shell is not bash.
- ls doesn't understand simple things like `ls DIRECTORY -la` (gnu ls takes options after the directory, solaris ls won't) and it doesn't do color (supposedly because some retarded admins think a more optimal user interface is 'unprofessional').
- you have to fight with ps to get it to list anything the way you want unless you use -o and hope some other option you used doesn't break it.
- trying to figure out how much memory you have is a nightmare.
- most system information is complex and hidden behind solaris-specific tools or APIs.
- /export/home and /home both exist (sometimes on different partitions), and for some reason root's home directory is / even though /root exists.
- the OS does nothing to set useful values like $PS1.
all that is just stuff that's happened to me within 5 minutes. jesus christ solaris, you've had a million years to catch up to the usability of gnu tools. either replace your old broken shit or freshen it up (while people still use your antiquated crap).
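for what it's worth, a few things that make a stock box slightly less painful (from memory, assuming solaris 10 with the bundled freeware under /usr/sfw; paths and options may vary):

# some gnu tools actually ship with solaris 10, just hidden (opencsw adds more under /opt/csw)
export PATH=/usr/sfw/bin:/opt/csw/bin:$PATH
gtar -xzvf foo.tar.gz

# how much memory the box has
prtconf | grep -i 'memory size'

# a ps listing you can actually read, sorted by cpu
ps -eo pid,user,pcpu,vsz,args | sort -rn -k3,3 | head

# bash is there, it's just not the default; at least get a prompt that says where you are
/usr/bin/bash
export PS1='\u@\h:\w\$ '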
Tuesday, January 4, 2011
reality inside a video game
the thought occurred to me that in virtual worlds, avatars rarely (if ever) age. as far as i can tell, The Sims 2 is the only game where characters are born, grow old, and subsequently die. i think this paints a vivid picture of how we see our virtual worlds and how we wish to spend our time - that is to say, not thinking about mortality.
probably most video games which involve a [virtually] living entity grant it certain properties of immortality. usually an avatar will find it difficult to die, and once they do die, they're immediately given the opportunity to come back to life. it's left up to the user to determine if they should be brought back or not. they can be revived indefinitely if the user so chooses.
but where's the reality in that? there is no 'save point' in real life. if the user dies, it's game over permanently. in many aspects we have total control over our lives and in a very GTA way we can do anything we want. but we also have our own personal limitations and the limitations of the world around us. we are free to expand, only to be confined in boundaries.
so where is the birth? where is the growth, the learning, the adaptation and choices that shape our lives? why haven't we fully grasped those crucial factors of real life and distilled them to a video game form? to me, this is the ultimate video game: one in which just playing the game changes the shape and course of our lives. where we're no longer bystanders but active participants.
in my game the user gets only one avatar, and they have to follow it through its life. if the avatar dies, they don't get to play again until the average life expectancy of that character is up. there will be no way to circumvent this - no "ruling class" shaped by how much money the user pumps into the system to try to revive their character. everyone gets the same access and plays by the same rules. there will also be no "gold farming", no multiple accounts, trading or buying/selling. the ability to play this game will be rigidly structured, with the same building blocks, freedoms and boundaries as the user could find in our own world. this will be the most addictive game ever made.
obviously with such imposing limits on how and if the user can play, people will have to be very careful how they play. there will be no running around like a jack-ass and fucking with people, because who knows... they might just stab the user for being such a jack-ass. now he/she's dead, and they can't play again for 40 or 50 literal human years. kind of a morbid warning to others, but it reflects some of the motivations we all have to remain calm, respectful human beings instead of what we regress to on anonymous mediums without fear of retribution.
besides this "real reality" imposed by the limitations and fragility of an existence such as human beings, users will also experience what it is like to grow up inside the system. they'll start either from birth or from being very small children, perhaps with 'parents' or some other adult guardian figure. the hope is that we can actually teach people lessons about life. obviously most users will already have the knowledge of a teenager or older, so basic concepts like reading, history, etc may not be necessary. but the interaction with other avatars - also controlled by users jsut like them, in an environment not unlike the real world - may help them realize things about life they hadn't noticed before. in this world the user isn't the same person - they are someone unique, and they don't get to decide who they are. by living life in the shoes of a random individual they may find new things to discover about the real world. in this way, the user actually learns as the avatar does.
this whole concept hinges on the idea of imposing all the restrictions of the real world in the virtual one. an avatar cannot be allowed to have anything given to them except what they may receive as a normal part of trade or the economy of the virtual world. users may not "inject" currency into the system - as in the real world, they have to earn, steal or be gifted anything they want or need.
this brings up some more real-life aspects not often found in video games (except perhaps the Sims): food, clothing, shelter. we all need it, and it's almost never for free. unless someone decides to build a homeless shelter and find a way to gather the resources to give this stuff away for free, everything is simply acquired via the standard means used in the real world. but the avatar must eat, and must sleep, and must be kept in good health. all of these things are related to how we interact with our real world and how we live and grow and learn, and thus must be replicated in the virtual world.
as in the real world, there will be ways for avatars to go to school, read books, listen to music, play basketball... anything we can think of to try to replicate the common human experience. but this also includes the negative aspects of our human condition. selling drugs, molesting children, murdering, overthrowing governments. oh yes, we'll need a government and people to enforce its laws. i think it will be interesting to start with anarchy, and see if a system of government (or perhaps a cult?) develops to enforce rule of law and order. because many people will be apathetic to the idea of this virtual world, many people may begin playing and immediately try to wreak havoc. this is part of how starting as a child may be beneficial; they may not be able to attain the resources to cause much trouble. but the people will also need to police themselves and try to prevent malice from destroying what they've built for themselves.
the idea is difficult to realize. to build an entire real world in a computer... is this not what The Matrix is supposed to be? it seems like an enormous undertaking, and one fraught with trouble. but as we learned from The Matrix, there must be flaws for it to be convincing. it must be a harsh world if people are going to recognize it as real. however, this world will have no AI (until at some point the scale of the system requires it). any AI that people could actually interact with would defeat the purpose of the entire system. in a realistic world, we build it, we shape it. there may be things about nature that help define what it is, but ultimately we determine our own destinies. an AI mucking things up would be like a God that gets to decide if you get ice cream or don't get ice cream. in truth, we all know it is us that makes such decisions.