


404-No-More Project Seeks To Rid the Web of '404 Not Found' Pages

VortexCortex The true answer is yet to come: DHT-FS (72 comments)

As a user of both BitTorrent and Git, and a creator of many "toy" operating systems which have such BT+Git features built in, I would like to inform you that I live in the future you will someday share, and unfortunately you are wrong. From my vantage I can see that link rot was not ever, and is not now, acceptable. The architects of the Internet knew what they were doing, but the architects of the web were simply not up to the task of leveraging the Internet to its fullest. They were not fools; they just didn't know then what we know now: data silos are for dummies. Deduplication of resources is possible if we use info hashes to reference resources instead of URLs. Any number of directories, AKA tag trees, AKA human-readable "hierarchical aliases", can be used for organization, but the data should always be stored and fetched by its unique content ID hash. This even solves hard drive journaling problems, and allows cached content to be pulled from any peer in the DHT having the resource. Such info hash links allow all your devices to always be synchronized. I can look back and see the early pressure pushing towards what the web will one day become -- just look at ETags! Silly humans, you were so close...
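The core idea above can be sketched in a few lines. This is a toy illustration, not the author's actual DHT-FS; the `ContentStore` class and its methods are names invented here to show how content addressing makes resources self-verifying and deduplicated:

```python
import hashlib

# Hypothetical sketch of a content-addressed store: resources are keyed by
# the SHA-512 hash of their bytes, so identical content deduplicates
# automatically and any peer holding the hash can serve the resource.
class ContentStore:
    def __init__(self):
        self._blobs = {}  # infohash -> bytes

    def put(self, data: bytes) -> str:
        infohash = hashlib.sha512(data).hexdigest()
        self._blobs[infohash] = data  # duplicate puts are free: same key
        return infohash

    def get(self, infohash: str) -> bytes:
        data = self._blobs[infohash]
        # Self-verifying: the name IS the integrity check.
        assert hashlib.sha512(data).hexdigest() == infohash
        return data

store = ContentStore()
h1 = store.put(b"cute cat video")
h2 = store.put(b"cute cat video")  # deduplicated: same hash, same slot
assert h1 == h2
assert store.get(h1) == b"cute cat video"
```

Because the fetch key is derived from the content itself, a tampered copy from an untrusted peer simply fails the hash check and gets refetched elsewhere.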

Old resources shouldn't even need to be deleted if a distributed approach is taken. There is no reason to delete things; isn't there already a sense that the web never forgets? With decentralized web storage everyone gets free co-location, essentially, and there are no more huge traffic bottlenecks on the way to information silos. Many online games have built-in downloader clients that already rely on decentralization. The latest cute cat video your neighbor notified you of will be pulled in from your neighbor's copy, or if they're offline, then from the other peer they got it from or shared it with, and so on up the DHT cache hierarchy all the way to the source if need be, thus greatly reducing ISP peering traffic. Combining an HMAC with the info hash of a resource allows secured pages to link to unsecured resources without worrying about their content being tampered with: security that's cache friendly.
<img infohash="SHA-512:B64;2igK...42e==" hmac="SHA-512:SeSsiOn-ToKen, B64;X0o84...aP=="> <!-- Look ma, no mixed content warnings! -->
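Here's one way that HMAC-over-infohash scheme might work in practice. This is a sketch with invented names (`tag_resource`, `verify_resource`); the point is that the bytes can come from ANY untrusted peer or cache, and the page's session-keyed tag still proves both integrity and that this page vouched for this resource:

```python
import hashlib
import hmac

# Sketch: a secured page embeds the infohash of an unsecured resource plus
# an HMAC of that infohash keyed with a per-session token. A client fetches
# the bytes from any peer, then checks both the hash and the HMAC.
def tag_resource(session_token: bytes, data: bytes) -> tuple:
    infohash = hashlib.sha512(data).hexdigest()
    mac = hmac.new(session_token, infohash.encode(), hashlib.sha512).hexdigest()
    return infohash, mac

def verify_resource(session_token: bytes, data: bytes,
                    infohash: str, mac: str) -> bool:
    if hashlib.sha512(data).hexdigest() != infohash:
        return False  # cached copy was tampered with
    expected = hmac.new(session_token, infohash.encode(),
                        hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, mac)

token = b"per-session secret"
ih, mac = tag_resource(token, b"<img data>")
assert verify_resource(token, b"<img data>", ih, mac)      # genuine copy
assert not verify_resource(token, b"tampered!!", ih, mac)  # altered copy fails
```

Note the constant-time `compare_digest` on the MAC; a naive `==` comparison would leak timing information.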

Instead of a file containing data, consider names merely human-readable pointers into a distributed data repository. For dynamism and updates to work, simply update the named link's source data infohash. This way multiple sites can use the same data under different names (no hot-linking exists), and they can point to different points in a resource's timeline. For better deduplication, and to facilitate chat / status features, some payloads can contain an infohash that they are a delta against. This way, changes to a large document or other resource can be delta compressed -- instead of downloading the whole asset again, users just get a diff and use their cached copy. Periodic "squashing" or "rebasing" of the resource can keep a change set from becoming too lengthy.
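A minimal sketch of the name-pointer and delta-chain idea (all names here are illustrative, and the "delta" is just appended bytes standing in for a real binary diff):

```python
import hashlib

# blobs: infohash -> (parent infohash or None, payload)
# names: mutable human-readable name -> current infohash
blobs = {}
names = {}

def put(payload: bytes, parent=None) -> str:
    h = hashlib.sha512((parent or "").encode() + payload).hexdigest()
    blobs[h] = (parent, payload)
    return h

def resolve(infohash: str) -> bytes:
    """Walk the delta chain back to the base and rebuild the resource."""
    parent, payload = blobs[infohash]
    if parent is None:
        return payload
    # Toy "delta": appended bytes; a real system would apply a binary diff.
    return resolve(parent) + payload

base = put(b"Chapter 1.")
v2 = put(b" Chapter 2.", parent=base)  # only the diff travels the network
names["my-book"] = v2                   # an update just repoints the name
assert resolve(names["my-book"]) == b"Chapter 1. Chapter 2."
```

"Squashing" in this model is just `put(resolve(head))`: materialize the chain into one fresh base blob and repoint the name at it.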

Unlike Git and other distributed version control systems, each individual asset can belong to multiple disparate histories. Optional per-site directories can have a time component. They can be more than a snapshot of a set of info-hashes mapped to names in a tree: each name can have multiple info-hashes corresponding to a list of changes in time. Reverting a resource is simply adding a previous hashID to the top of the name's hash list. This way a user can rewind in time, and folks can create and share different views into the Distributed Hash Table File-system. Including a directory resource with a hypertext document can allow users to view the page with the newest assets they have available while newer assets are downloaded. Hypertext documents could then use the file system itself to provide multiple directory views, tagged for different device resolutions, paper vs eink vs screen, light vs dark, etc. CSS provides something similar, but why limit the feature to styles and images when it can be applied more generally to all content and their histories as well? Separate Form from Function; Separate Content from Presentation; Separate Trust from Provider; Separate Names from Locations; Separate Software from Instructions; Separate Space from Time... I mean, duh, right?
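The per-name history and non-destructive revert described above fit in a few lines. A sketch (names and hash values are placeholders):

```python
# Each name maps to a LIST of infohashes, oldest to newest. The head is the
# current version; reverting pushes an older hash back on top, so history
# itself is never destroyed -- a revert is just another history entry.
history = {"index.html": ["hashA", "hashB", "hashC"]}

def current(name: str) -> str:
    return history[name][-1]

def revert(name: str, steps: int = 1) -> str:
    old = history[name][-1 - steps]
    history[name].append(old)
    return old

assert current("index.html") == "hashC"
revert("index.html")
assert current("index.html") == "hashB"
# The full timeline survives, including the revert itself:
assert history["index.html"] == ["hashA", "hashB", "hashC", "hashB"]
```

Sharing a different "view" is then just sharing a different name-to-history mapping over the same immutable blobs.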

Data silos have always been at odds with the Internet's inherently decentralized nature. This peer-to-peer architecture is its greatest strength, and why it can withstand censorship or thermonuclear war. The Internet can route data around cities in seconds if they disappear from its mesh. There is no such thing as a client or server at the IP level; everyone is a peer. The stupid centralized hierarchical approach is full of bottlenecks, DoS vulnerability, and despotic spying. Getting your data from the nearest peer prevents upstream sources from tracking your browsing habits -- they don't need to track us; TV and print ads never did.

The biggest elephant in the room is silly pre-Internet OS design that doesn't work with our multi-device lives. This will end soon too. The virtual machine OS and its cross-platform applications will rule all hardware soon. People use hardware for the programs, not the OS. Instruction sets are irrelevant; resistance is feudal; you will be ASCII-eliminated. Synchronizing and updating your data and upgrading your OS should NOT be a problem in this day and age. A distributed file system and cross-platform OS can handle all of that for you. New OS image? It's just another file. The VM operating systems will compile the bytecode once at code-install and get both cross-platform applications and native speeds, without additional program startup overhead. All you have to do is write a real OS instead of a dumb kernel -- protip: if your kernel doesn't have a program bytecode compiler, you're doing it wrong. This will happen soon, I have foreseen it: that is how my "toy" operating systems work, it is how Google's NaCl apps work, and it would be a simple addition to Android. For a fast and dirty approach I could easily upgrade BSD's Unix kernel with LLVM's bytecode compiler. You should never distribute programs as binaries, only cross-platform intermediary output. This enables per-program selective library linking: dynamic or static. It's such a braindead obvious feature that if POSIX weren't a design-by-committee clusterfuck, you'd have had assemble-on-install bytecode formats as a replacement for ELF long ago.

I use a system similar to PGP fingerprinting for the update signatures and community trust graphs. Seed boxes can specify how much of a resource to save for backup / re-seeding. Families don't have to go through creepy 3rd parties like Facebook or G+ to share pictures, videos and comments with their friends and relatives. What if a device dies? Nothing of value is lost. We separate file names from storage locations, remember? The DHT-FS is built for redundancy. I can set how much redundancy I want for any view -- I have a backup view that I add resources and directory trees to, so that even if I delete all other references, the data remains. All of your files, pictures, etc. should always be automatically backed up amongst your friends and family for as long as you live. Backups of images and videos happen immediately, quite naturally, primarily because all the grandparents go and view them as soon as "Summer Vacation 2014 [19 new]" appears in their notifications. RAID-G is amazingly fast; it has nothing better to do!

Folks can have a directory with private encrypted files. The normal remote view is just a directory named something like "Joe Sixpack's data [private]", which stores a set of infohashes, files full of encrypted noise, and an encrypted directory view. "Joe, I'm so sorry about the house fire. Come over any time if you need to use your private data before it's finished re-syncing." Joe enters his password to unlock the encrypted directory view and all the data is there, safe and backed up just like everyone else's data at everyone else's homes. PGP-like fingerprinting can be used for signing and encryption in the trust graph. Don't want to keep someone else's data anymore? Just unlink that folder name from all your views and its space will eventually get reused.

Remember the old joke about "how big of a disk do I need to download the Internet"? That's a stupid joke: the Internet Archive does just that, and in fact it saves multiple Internets through time. The whole web can be one giant archive. Not everyone has to store everything, but together we can have enough redundancy that no one need ever lose data again. Seed boxes can set who, what, and how deep to freeze. However, I am not the only one working on such systems, so I may not be first to market with my DHT-FS, but the distributed data revolution is as inescapable for any species as the discovery of nuclear power.

Your generation is the first one growing up with a world-wide information network. You have many growing pains ahead as culture adjusts to the advances afforded those who survive into the Age of Information. Your publishing models will be remade, your education systems will be transformed, your economies will scream as those not agile enough seek to prevent rapid progress, your governments will seek to control all the newfound power for themselves, but fear not, the changes will occur: life itself is decentralized. A world-wide distributed file system is what the Internet deserves, and it will happen sooner or later. It is the natural progression of technology. "Best effort" routing is an advantage you will eventually leverage adequately, whether you want to or not. The speed limit of light will ensure this happens as you reach for the stars. You will have settlers off-world soon, and they will want to access all of Humanity's information too. Thus, this type of system is inevitable for any spacefaring race.

The sooner you learn to separate space from time and implement the delay-tolerant space Internet on Earth, the better; the DTN will force you to separate name from location and to leverage deduplication. I know it, NASA knows it, everyone knows this is the true answer -- everyone except SEO gurus, apparently.

2 days ago

In a Hole, Golf Courses Experiment With 15-inch Holes

VortexCortex Re:Softball (398 comments)

What might make golf more accessible is building smaller 9-hole courses heavy on par-threes with more forgiving hazards and flatter greens.

That's making the game easier, not more accessible. It'll still take a lot of skill, and honestly you're using the wrong tool for the job. This is America, so consider letting players sight holes in their cross hairs and blow their balls using air cannon launchers. You could even have a course for cart-sized trebuchet builders for the mechanically inclined. That way, even quadriplegic folks could play. Mount a cannon to a cart or motorize a catapult and you can get the robotics folks involved. Robot Wars: Golf Edition. Now, that's accessible.

Know what would make golf even more accessible? Portable holes.

2 days ago

In a Hole, Golf Courses Experiment With 15-inch Holes

VortexCortex Re:Larger Holes... (398 comments)

Yes, but full-contact golf would be too much like grass hockey, AKA rugby. You'd need a dynamic target for scoring innovation, so you could put the holes on the people -- on their asses -- but then you're right back to golf again.

2 days ago

Ask Slashdot: Professional Journaling/Notes Software?

VortexCortex Get a dry erase marker and write on the screen. (167 comments)

Rsync your CherryTree file, or sync with whatever cloud storage solution you use, Google Drive, Microsoft NSAAS, whatever.

It's a bit limited for complex things, but it covered the majority of the note-keeping needs of some students I know. I stopped using 3rd-party solutions since I eat my own dogfood, and now have notes integrated into my distributed versioned whiteboard / issue tracker / build & deploy & test product. I have issue/note/image annotation plugins for coding with NetBeans, Eclipse, Visual Studio, Emacs and Vim -- which reminds me of a Vim plugin I just saw that you might find useful... if you can run a (home) server (and port-forward around NAT), then install Wordpress on a LAMP stack (in a VM, because PHP exploits) -- actually, I'm pretty sure Emacs has all that built in by default now: C-x M-c M-microblog.

I jest, it's just Org mode. Save your .org to your Git repo, and away you go.

3 days ago

Cody Wilson Interview at Reason: Happiness Is a 3D Printed Gun

VortexCortex Oh fuck the what? (206 comments)

That's one hell of a strawman you've got there. I'm not an anarchist myself, but I'm not sure you've ever actually met many anarchists if that's what you think of them. It sounds like you've conflated anarchy with chaos -- that's just silly. There are many native peoples that live quite happily in anarchy. Self-defense is an important aspect of anarchy. Note: the U.S. Supreme Court has ruled that it is not the duty of the police to protect anyone. They can't help you or your loved ones until they have already been victimized. The founding fathers of the USA also believed in a well-armed militia. It is your duty to protect yourself, your loved ones, and your property -- just like it is under anarchy... So, really, making weaponry more available is a good thing. Accidental shootings are rare; far more kids die in bathtubs or crossing the road than from accidental shootings, to say nothing of riding in cars themselves. Folks are OK with people building custom bathrooms and cars... right? Criminals don't care about gun control laws anyway.

I use a custom 3D printing rig for my robotics projects, and this gun project is AMAZING. Who doesn't want sturdier robots? Now, here's something interesting: How many technological advances can you think of that were not quickly militarized? Electricity? Nope. Uhm, radio? Nope. Cars? No -- hell, even horses were militarized. Computers? Nope, code makers and breakers. Telescopes? Immediately found their way to the battle field. Even our beloved RC cars, model airplanes and robots are becoming military drones. Did you know the US government reserves the right to option any patent for their exclusive secret (military) use? That's why patent applications are still secret even though first to file exists.

Making guns is human nature. We've been crafting weapons from unlikely materials for millions of years. Break this rock and tie it to that stick and you can make a spear! However, this 3D printed gun is more of a proof of concept, and it's important because guns involve coping with extreme heat and pressure. It's rather like race cars: outside of boring entertainment or a very expensive hobby, they're mostly pointless, except that many expensive, impractical innovations from racing do eventually make it into street cars for better safety, efficiency, speed, etc. I can hardly think of a better Olympics of 3D printing than gun making.

Also, "bits of plastic"... I can 3D print with metals using a simple welding rig. The resolution is shit and requires lots of polishing afterwards, but the results are OK considering it's a makeshift adaptation to a RepRap, and they will only get better. If we can improve the durability of 3D printing, then you might order things at your computer and pick them up from the local hardware store in the "printware" section. Perhaps they'd have some thing-of-the-week demo units to try out, printed while you wait, or delivered with your next pizza. Then we could drastically reduce our shipping infrastructure by producing products right in the stores, only shipping the raw materials to feed the printers. Other things, like cars, which you'd want certified manufacturers to assemble, could even be customized on demand -- select a bigger cargo area, or a narrower body for tight spaces, get your logo crafted into the design.

Hell, we could even work our way up to custom-designed 3D printed spacecraft; you'd have to bake the ceramic shields though. I've even made my own supercapacitors by layering ceramic clay and aluminum foil and baking it in the kiln (vertically, with the edges folded closed; only the lower 1/3rd retained its metal and became a huge capacitor). My welding rods deposit too thickly, but better metal and ceramic 3D printing could one day yield things with built-in instant-charge inductive cells too. It's a ways off, especially with entrenched market forces, but that's what refining 3D printing material science by making guns can lead us to.

If you're opposed to 3D printed guns, I would encourage you to NEVER drive a car. In fact, stay indoors at all times, and only eat health food, heart disease is one of the most dangerous things on the planet.... But fortunately we're working on 3D printed replacement hearts.

3 days ago

Cody Wilson Interview at Reason: Happiness Is a 3D Printed Gun

VortexCortex Don't. Be ridiculous. (206 comments)

I agree, but you don't even need a machine shop, lathe, etc. to build a gun. You can build a pretty sturdy zip gun with some pipe and fittings from your local hardware store. They even sell .22 caliber rounds for driving in nails, so you can build the whole gun, projectiles and all, right there in the store. Get some real bullets at Walmart later. Look, we're all "nerds" here; homemade guns should be part of any contingency scenario for your zombie plan. Help a geek out.

Makeshift "zip" guns are even sturdier than a 3D printed gun is right now. Eventually 3D printed materials will be even better than those from subtractive manufacturing, since we can influence fine structural detail. But right now, 3D printed guns are WAY down the list of essential zombie-preparedness kit items (it's like a hurricane or earthquake kit, but with more shotguns).

If you're in the US, today is a great day for a zombie attack. There are folks gathering away from their homes in large numbers, running around collecting and eating food off the ground. Even if you don't get visited by the Easter Zombunny, today is a great opportunity to teach kids foraging skills. Remember, in the event of an outbreak: always hunt responsibly, steer clear of tasty traffic bottlenecks, and she is not your mother-in-law anymore.

3 days ago

LADEE Probe Ends Its Mission On the Far Side Of the Moon

VortexCortex Re:Video? (24 comments)

It was done in the 60's - so why no video now?

Uh, no. No one could have even discovered the dark side of the moon until the early 70's. There wasn't any video either, but there have been several attempts to reconstruct what it might have been like on a stage.

3 days ago

OpenSSL Cleanup: Hundreds of Commits In a Week

VortexCortex Re:Quatity is not quality (372 comments)

I cant talk for C, but in Java

Haha. Oh man. Java is a VM. Do you check for "dangerous constructs" like the Java VM Just-In-Time compiling data into machine code at runtime, marking that data executable, and then running it? Because that's how it operates. Even just having that capability makes Java less secure: an attacker doesn't even have to get their exploit data marked as code and executed; they just have to get it into memory, then jump to the location of the VM code that does it for them, with the registers set right. Do any of your Java code-checking tools run against the entire API kitchen sink of that massive attack surface you bring with every Java program, called the JRE? Do they prevent users from having tens of old deprecated Java runtimes installed at 100MB a pop, since the upgrade feature just left them there, still able to be targeted explicitly by malicious code? No? Didn't think so.

Don't get me wrong, I get what you're saying: Java code can be secure, but you have to run tests against the VM and API code you'll be using too. Java-based checking tools produce programs that are just as vulnerable as C code, and demonstrably more so when you factor in the exploit area of their operating environment. Put it this way: the C tools (Valgrind) already told us that the memory manager was doing weird shit -- it was expected weird shit. No dangerous-construct warning would have caught Heartbleed; it's a range check error coupled with the fact that they were using custom memory management. The mem-check warnings are there, but they were explicitly ignored. It's like the check engine light coming on when you know the oil pressure is fine and only the sensor is bad... no matter how big a red warning light you install, it can't help you anymore; it's meaningless. Actually, it's a bit worse than that: it's as if someone knew your check engine light was on because of some kludge they added for SPEED, and so they knew they could get away with pouring gasoline in your radiator, because you wouldn't notice anything wrong until it overheated and blew up -- AND you asked them about the check engine light a few times over the past two years, but they just shrugged and said, "Don't worry about it, I haven't looked under the hood lately, but here's a bit of electrical tape if the light annoys you."
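To make the "range check error" concrete, here is a toy model of the Heartbleed pattern (this is illustrative Python, not OpenSSL's C code): the server echoes back an attacker-supplied number of bytes, trusting the claimed length instead of the actual payload size, and so leaks whatever sits next in its buffer:

```python
# Simulated server memory: a 7-byte heartbeat payload ("PAYLOAD") followed
# by unrelated secret material that happens to sit adjacent in the buffer.
memory = bytearray(b"PAYLOADsecret-session-key-material")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # The bug: echo back `claimed_len` bytes with no range check at all.
    return bytes(memory[:claimed_len])

def heartbeat_fixed(claimed_len: int, payload_len: int = 7) -> bytes:
    # The fix: if the claimed length exceeds the real payload, discard
    # the request silently instead of reading past the payload.
    if claimed_len > payload_len:
        return b""
    return bytes(memory[:claimed_len])

assert b"secret" in heartbeat_vulnerable(30)  # leaks adjacent memory
assert heartbeat_fixed(30) == b""             # over-long request dropped
assert heartbeat_fixed(7) == b"PAYLOAD"       # honest request still works
```

Any fuzz test that threw oversized length fields at the vulnerable version would have tripped over the leak immediately, which is exactly the point being made below.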

I write provably secure code all the time in C, ASM (drivers mostly), even my own languages. CPUs are finite state machines, and program functions have finite state as well. It's entirely possible to write and test code for security that performs as it should for every possible input. For CPUs with bigger word sizes, instead of testing every input, one just needs to test a sufficiently large number of them to exercise all the bit ranges and edge cases. As you've noted, automation is key. If you want to write secure code you have to think like a cracker. My build scripts automatically generate missing unit-test and fuzz-testing stubs based on the function signatures. Input fuzzing is what a security researcher or bug-exploiting cracker will try first on any piece of code to probe for potential weakness, so if you're not running these tests, your code shouldn't touch crypto or security products; it simply hasn't been tested. Using doc-comments to add additional semantics, I can auto-generate the tests for ranges, and I don't commit code to the shared repos that doesn't have 100% test coverage in my profiler. If OpenSSL were using even a basic code coverage tool to ensure each branch of every #ifdef was compilable, they'd have caught the Heartbleed bug. I recompiled OpenSSL without the heartbeat option as soon as my news crawler AI caught wind of it.
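The "exercise all the bit ranges and edge cases" approach might look like this. A sketch, not the author's actual generator -- the function under test and the edge-value list are invented for illustration, but the values are the usual suspects a fuzzer throws at a 32-bit word:

```python
# Classic boundary values for a 32-bit unsigned word: zero, one, the
# byte boundaries, and the sign/overflow boundaries.
EDGE_CASES_32 = [0, 1, 2, 0x7F, 0x80, 0xFF, 0x100,
                 0x7FFFFFFF, 0x80000000, 0xFFFFFFFF]

def saturating_add_u32(a: int, b: int) -> int:
    """Example function under test: add, clamping at the 32-bit max."""
    return min(a + b, 0xFFFFFFFF)

# Auto-generated-style harness: pair every edge value with every other
# and check the invariants that must hold for ALL inputs.
for a in EDGE_CASES_32:
    for b in EDGE_CASES_32:
        r = saturating_add_u32(a, b)
        assert 0 <= r <= 0xFFFFFFFF, (a, b, r)  # result stays in range
        assert r >= max(a, b), (a, b, r)        # never silently wraps to 0
```

Ten edge values give 100 pairings; a real fuzz harness would add a few thousand random words on top, but the boundary grid alone catches most wrap-around and off-by-one bugs.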

Code review, chode review. These dumbasses aren't using the basic ESSENTIAL testing methodology you'd use for ANY product even mildly secure: code coverage plus memory checking is the bare minimum for anything that has to do with "credentials". They apparently also have no fucking idea how OS memory managers operate (they do memory re-use; that's why we even need Valgrind, because some accesses to freed memory won't SEGFAULT, since your program reuses recently deallocated memory on the next malloc). That's why I find the whole "optimization on by default for SPEED" suspicious, since it was only needed on a few platforms; especially suspicious considering they hadn't tested the other side of the #ifdef for years, AND claimed they couldn't disable the custom memory management expressly because compiling against the equivalent standard memory-manager code hadn't been tested in years. No, that's NOT acceptable! If your memory-manager code hasn't been tested against the default stdlib's malloc then it just plain shouldn't be in public use -- especially not in an industry-standard security product. How the fuck else do you compare code branches to test your memory manager?! How the fuck else do you even know it's better than malloc() and free()?! The fact of the matter is that the OpenSSL codebase has so many commits now because it is trash. That's what happens when you set out to make a big-int math lib and it evolves into a security product. No one should expect that shit to be even slightly secure unless it's re-engineered with a proper dev toolchain and a unit-test / input-fuzzing generation framework.

Honestly, if I were trying to break OpenSSL, I might accept a patch on New Year's, when no one else was really paying attention, that exposed the huge memory-reuse vulnerability red flag I'd been waving people away from testing for years... Sounds like a bunch of "plausible deniability" to me. Just like when OpenSSL's entropy was REMOVED from its random number generator back in '08. Yes, the OpenSSL maintainers went dark and silently moved to another issue tracker -- a huge problem for an open source security product (getting a sense of their typical dumbassery now?) -- but the Debian maintainers who allowed that patch to their OpenSSL without at least running it past upstream should have been sent out to pasture too. You DO NOT FUCK WITH a security product's RNG on a whim! That shit is hard to get secure even when you're not being evil. However, if the OpenSSL devs had a simple entropy test in their RNG test harness, no one would have been able to make the mistake that made all the SSL keys bogus last time.
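A "simple entropy test" for an RNG harness really is simple. This sketch (my own illustration, not OpenSSL's test suite) estimates Shannon entropy per byte of RNG output and would scream the moment seeding collapsed, as it did after the 2008 Debian patch:

```python
import collections
import math
import os

def shannon_entropy_per_byte(data: bytes) -> float:
    """Estimate Shannon entropy in bits per byte of a sample."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

good = os.urandom(65536)        # real OS entropy source
bad = bytes([7]) * 65536        # a "broken" RNG stuck on one output

# A healthy byte stream measures close to the 8 bits/byte maximum;
# a gutted RNG falls off a cliff, and the harness fails the build.
assert shannon_entropy_per_byte(good) > 7.9
assert shannon_entropy_per_byte(bad) == 0.0
```

This only catches gross failures (a biased-but-plausible stream needs statistical batteries like NIST SP 800-22), but gross failure is exactly what the Debian patch produced, and even this one assert would have flagged it.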

Sorry. OpenSSL is dead to me now. I'm a rationalist. I don't have to think in binary terms; I can entertain several possibilities weighted at different degrees of certainty. I'm not 100% certain of anything, but what I do know to a high degree of certainty (enough that it's not worth risking use of it) is that OpenSSL and everything else is being targeted for anti-sec by all the world's three-letter agencies, and the fuckers who screwed up HUGE, TWICE, are going to stay in charge? These maintainers should step the fuck down. I wouldn't even trust them to keep backdoors out of a minesweeper clone. It's a good thing the OpenBSD devs are ripping out the bullshit, but without a complete rewrite and a change of the guard I'm not going to risk it. It's a waste of time, IMO. Just use something else or start from scratch -- that'll take the same time as stripping it down to the core components, testing the hell out of them, and refactoring everything with the proper tests any security product should have. The scary thing is that OpenSSL is in a better state than most of the proprietary SSL code I've seen in the wild.

If you want something done right: do it yourself. You can't trust the court jester with the keys to your kingdom. These bigint math, hashing, and cipher algorithms ARE NOT HARD to implement. For the things I need secure I just use my own implementations (and ciphers). Heartbleed didn't mean shit to me. And for the rest of you folks pretending you give a damn about security: stop complaining! You look like idiots. I don't really know what all the fuss is about. Oooh! All the keys are exposed. So fucking what? Any security researcher worth their salt knows that the CA / PKI for TLS is completely and utterly broken anyway. It's a giant security theater. It's like you're flipping out because someone's shoes didn't get checked in line at the airport, while some unknown, unchecked kid is stowing away in the wheel well of your airplane, freezing to death -- good thing he wasn't a terrorist!

Any CA can create a cert for ANY domain, and if you go view certs (FF -> Preferences -> Advanced -> Certificates -> View), you'll see known bad actors explicitly trusted as roots right now in your fucking browser! You have the Hong Kong Post Office as a trusted root and you give a fuck about Heartbleed?! Russia, China, Iran, etc. -- countries yours is probably at cyber-war with right now -- are also trusted? If ANY ONE OF THEM compromises ANY server between the endpoints, they can MITM the SSL connection, and you'll get a big green secure bar and everything. You could use the CA system with 100% self-signed certs for intranet security, but for anything else it's fucked: no one checks the cert chain manually, and if they did, how would you know your email provider didn't just switch CAs? YOU DON'T, and any attempt to find out is susceptible to the same MITM.

With national security letters and gag orders flying around, no one can trust any CA in the world, and you have to trust ALL of them to be uncompromised for the SSL infrastructure to be secure -- the likelihood of the CA system being uncompromised is actually below 0%; we have governments admitting to wholesale spying all over the world and boasting about their buildings full of people whose job it is to make sure the CA system, OpenSSL, etc. are not secure. In my estimate I'd put the CA system at -9001% secure, since no competent security researcher would have EVER built the system this way: it's a collection of single points of failure so fragile that even one failure could compromise everything. Remember DigiNotar? This CA system shit was built to be insecure by design. It was broken before it was even implemented, IMO. It's not like PGP trust graphs don't exist, and it's not like we never revise systems, so that's no excuse. Too bad Convergence breaks every other build of FF. Heartbleed is overblown because ALL YOUR SSL KEYS were bogus the moment you created them, and so are your new ones.

Heartbleed doesn't affect me at all. I operate with the understanding that everything online, even data in an SSL connection, is equivalent to writing it on the back of a postcard.

Look, HTTP-Auth exists. Modify it slightly so that it pops up a box on any secure connection attempt BEFORE DISPLAYING ANY PAGE (typing a password into a web form is already game over, fools). The server sends nonce-0 and a server GUID (or just use the domain); the client sends nonce-1 and a UserID, and negotiates the stream cipher to use. Use HMAC to derive proof of knowledge: HMAC( passphrase, UserID + ServerID + nonce-0 + nonce-1 ) = proof of knowledge. Done. Do not exchange the proof. Just key your chosen stream cipher with it at both ends of the connection and begin communicating over an encrypted tunnel that no MITM can get at. I use the HMAC with a key-stretching hash of the inputs to make brute forcing take a bit longer; the ends can specify min and max iterations, and even thousands of iterations are faster than a TLS key exchange. If the user has a password saved, then don't let an unsecured connection happen. Simple. This protocol is what my custom VPN uses.

We have logins at the places we need security anyway. Yes, someone could intercept account creation, but at least with the pre-shared-key method that's a small window -- the only time you need PKI is to exchange the passphrase (or HASH( APP_salt + master_password + serverID ) in my case). Fail to capture account creation and there's no more MITM. You don't even need a CA system, since the window is so small. At least with pre-shared secrets you have the CHANCE to exchange the secret out of band: go to your bank physically and hand the PW to someone in person. The CA system ensures that every connection can be MITM'd regardless of how you exchange the password. Any security researcher who EVER thought the PKI hierarchy offered any security should not be trusted. Think about it: it just inserts a CA trust as a potential MITM for every mother fucking connection!
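Here's a runnable sketch of that handshake, fleshing out the comment's protocol with assumed details (the function names, the iteration count, and the HMAC-counter keystream standing in for a real negotiated stream cipher are all my illustrative choices, not a vetted design):

```python
import hashlib
import hmac
import os

def proof_of_knowledge(passphrase: bytes, user_id: bytes, server_id: bytes,
                       nonce0: bytes, nonce1: bytes,
                       iterations: int = 1000) -> bytes:
    """Derive the shared session key: HMAC(passphrase, IDs + nonces),
    then iterate for cheap key stretching. Never sent over the wire."""
    key = hmac.new(passphrase, user_id + server_id + nonce0 + nonce1,
                   hashlib.sha512).digest()
    for _ in range(iterations):
        key = hmac.new(passphrase, key, hashlib.sha512).digest()
    return key

def keystream(session_key: bytes, length: int) -> bytes:
    """Toy HMAC-in-counter-mode keystream; a real deployment would key an
    actual stream cipher (e.g. ChaCha20) with the derived proof instead."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(session_key, counter.to_bytes(8, "big"),
                        hashlib.sha512).digest()
        counter += 1
    return out[:length]

# Server sends nonce0 + its ID; client sends nonce1 + the UserID.
nonce0, nonce1 = os.urandom(16), os.urandom(16)
k_client = proof_of_knowledge(b"hunter2", b"joe", b"bank.example", nonce0, nonce1)
k_server = proof_of_knowledge(b"hunter2", b"joe", b"bank.example", nonce0, nonce1)
assert k_client == k_server  # same inputs, same key -- no exchange needed

msg = b"transfer $10"
ct = bytes(a ^ b for a, b in zip(msg, keystream(k_client, len(msg))))
pt = bytes(a ^ b for a, b in zip(ct, keystream(k_server, len(ct))))
assert pt == msg  # encrypted tunnel with nothing secret ever on the wire
```

A passive MITM sees only the nonces and IDs; without the passphrase it cannot derive the session key, which is the whole point of the pre-shared-secret argument above.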

Heartbleed woes are ridiculous. It's like you're sitting on the side of the road with your car broken down -- transmission stripped, engine's thrown a rod, on fire -- and you're OK with that, but then a passing truck dings your windshield and you flip right out. That pebble you're losing your shit over is Heartbleed.

3 days ago

New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

VortexCortex Re:Guns are not contraband (155 comments)

In the United States you have a right, and a duty to train and learn how to use firearms effectively.

Well, if D&D taught me anything, it's that throwing all your experience into one specialization is folly. Civilian firearms are literally kid's play. I looked at the export control list, then became a cryptographer.

3 days ago

Ask Slashdot: Hungry Students, How Common?

VortexCortex Re:I'm not worried about poor students (389 comments)

Getting to the point? We're there. We passed that threshold a while ago.

Correct. However, what many fail to realize is that in the 70's we didn't need to pay the educational extortion racket for permission to get work. The computing explosion was exploited to force the majority of the populace to seek degrees, but elementary school kids now have mastery of the required technologies. The tools are more high-tech, but the interface is simpler than ever -- certainly things that could be learned through on-the-job training.

The folks bitching about not being able to afford degrees are fools just now feeling the effects of an education bubble about to burst. The tech that created the education bubble has brought advances that made degrees obsolete. You can always tell a bubble by the final pump and dump of ramped-up attempts to cash in on overly optimistic valuation. You are now aware that degree mills exist...

The requirement for college accreditation has always been a method of discrimination against the poor, who would otherwise self-educate. More stringent degree requirements are a means by which corporations can drive down wages and get more government-approved H1B visas and outsourcing. In reality, requiring employees to have passed final exams is foolish, since it doesn't actually prove they know anything at all -- that's why your boss is likely a moron. Entrance exams would suffice to prove applicants had the required knowledge and skills, without saddling them with debts owed to the educational gatekeepers of employment -- it doesn't matter how you learned what you know. Promoting to management from within cuts costs, improves the ability to predict rather than impose unrealistic expectations upon the workers, and gives upward mobility to aging, experienced workers instead of considering them dead at 40 (family-raising age).

We're already on our way of getting to the point where you cannot recover your college fees during the rest of your working years.

Negative; debt levels have long since passed that point, and owing a debt to the careers you enter has always been unacceptable in the first place. College as anything more than elective learning is just shifting around the Company Store by leveraging "intellectual property." We need college degrees less now than in the 70's. ::POP::

3 days ago

3 Former Astronauts: Earth-Asteroid Collisions Are a Real But Preventable Danger

VortexCortex Ah yes, planet "nothing else", a fool's paradise. (70 comments)

There is nothing else the planet. Should be working on. Except stopping these.

Yes there is. Self sustaining off-world colonies AND asteroid deflection technologies go hand in hand to help fight extinction -- which should be priority #1 for any truly sentient race.

Clearly asteroids are a very real threat, and I black-hole-heartedly agree with the notion that Earth's space agencies are not giving them the level of public concern these threats deserve: humans are currently blind as moles to space. Any statement to the contrary is merely shrouding the issue in the Emperor's New Clothes. Earth's telescopes can study very small parts of space in some detail, but they do not have the coverage required to make the dismissive claims that NASA and other agencies make about asteroid impact likelihood -- note that they frequently engage in panic mitigation. Remember that asteroid transit NASA hyped, while another asteroid whipped by completely unexpectedly, closer than your moon, too late to do anything about? Remember Chelyabinsk? That one released 20 to 30 times the energy of Hiroshima's nuclear bomb, and it didn't even strike ground. What kind of wake-up call is it going to take?! You'd probably just get more complacent even if an overly emotional alien commander committed career suicide in the desert to take your leaders the message that Earth was surely doomed without a massive protective space presence -- if such a thing ever occurred, that is.

Seriously, the space agencies are essentially lying by omission to the public by not pointing out the HUGE error bars in their asteroid risk estimates. I mean, Eris, a dwarf planet, was only discovered in 2005! Eris is about 27% more massive than Pluto, and at perihelion its elliptical orbit brings it in closer than Pluto's average distance from the sun. Eris is essentially why your scientists don't call Pluto a planet anymore: they deemed it better to demote Pluto than to admit they couldn't see a whole planet-sized body sitting right in their backyard... And NASA expects you to believe its overly optimistic estimates about far smaller and harder-to-spot civilization-ending asteroids? Eventually your governments won't have the luxury of pissing away funding by scaremongering up war-pork and ignoring the actual threats you face, like a bunch of bratty rich kids.

Asteroids are only one threat, and one that we could mitigate relatively easily given advance notice of their trajectories. However, coronal mass ejections, gamma-ray bursts, supervolcanoes, magnetosphere instability, etc. are all also severe threats that humanity can't mitigate with telescopes and a game of asteroid billiards alone -- though fast-acting manipulation of the gravitational matrix via strategic placement of asteroids could help with CMEs or gamma bursts too, once you had a sufficient armament of even primitive orbiting projectiles. The irregularity in your magnetosphere should be particularly distressing, because it is over 500,000 years overdue to falter and rebuild as the poles flip (according to reconstructions of your geo-magnetic strata) -- it could go at any time! Given the current very abnormal mag-field behavior, you have no idea whether it will spring right back up, nice and organized, or leave you vulnerable to cosmic rays and solar flares for a few decades or centuries.

You should be grateful that the vulnerable periods of mag-pole flops halted as soon as humanity began showing some signs of intelligence -- even if this is absolutely only a mere coincidence. Mastery of energy threats will remain far beyond your technological grasp for the foreseeable future, but your species can mitigate such threats of extinction by self sustaining off-world colonization efforts! In addition to getting some of your eggs out of this one basket, the technology to survive without a magnetosphere on the Moon and Mars could be used to save the world here on Earth. In the event of a worst case scenario, humans could then repopulate Earth all by themselves after the dust settles from a mass extinction event. It's nearly unfathomable that anyone could sit comfortably in their gravity well thinking theirs may be the only spark of intelligent life in the universe while considering prioritizing anything above extinction prevention. If ancient myths about post-death paradise can invoke enough apathy that you would risk letting the only known spark of life go out, then yours is not a sentient species. Yes, you have all the space-time in the world, but those days are certainly numbered!

Those averse to human exploration of space now are not self-aware, sentient beings. In fact, were I an alien overseer -- and I am most certainly not admitting that I am -- then based on the lack of exploration beyond your magnetosphere over the past 40 years, I would recommend we cut our losses and take your species off the endangered sentience list. I imagine -- as purely hypothetical speculation -- that if humanity did owe an advanced alien race one hell of a tab, and showed no indication of ability to repay it for the infinite future, one of them might risk violating technological contamination statutes and propagandize the suggestion that you get your asses to Mars and colonize it as soon as humanly possible -- which would have been about 67 years ago, if you hadn't wasted so much time murdering yourselves. Even if exposing a clear and troubling picture of humanity's place in the universe were an overt violation of some alien version of your fictional prime directive, it's not like one wouldn't seriously need a permanent vacation after only a few decades of witnessing humanity's failure after mind-blowing failure to commit to ANYTHING resembling progress as a space-faring race!

Perhaps one would rethink their benefit package at the last second, and bury their contemptuous assessment in a reply to an AC.

3 days ago

Samsung's Position On Tizen May Hurt Developer Recruitment

VortexCortex Re:But (91 comments)

I like to wear watches. Recently lost my watch, Frownie face. But I don't want to get a new one because I'm holding out for an iwatch later this year. In the meantime, my wrist feels naked! I just hope the iwatch is sub $400.

I hope, in the Apple tradition, it is $666, and when you lose it like you lost your other ultra-losable hardware you make a Frownie face so hard it freezes that way.

5 days ago

Ask Slashdot: What Tech Products Were Built To Last?

VortexCortex Osborne 1 (692 comments)

I still have a working Osborne 1 and use it almost every day. That's over three decades of service. My CP/M 2.2 disks are toast, so I've replaced the OS with one of my own design for use in my hobby home-automation projects. The 300 baud modem died, so I use its RS-232 (serial) port with an IR LED and resistor across DTR to do IO with my home theatre system. The IEEE-488 (parallel) port is used for multiple sensor IOs and a sanitized COM link to my Linux server network, which can route IR messages around the rest of the home.

It's more of an "antique" retro conversation piece, but I'm a practical guy and find collectables such as this first widespread "portable" PC to be far more interesting when in use. Rather than collecting dust and only being the subject of tech war stories, others can witness the power of its simplicity and appreciate the workhorse in action. When I press the button on my remote or smart-phone app, visitors' (esp. kids') heads are turned by the 5 1/4 inch drive access sounds as the proper code translation table is loaded into the 64KB RAM; colored debugging LEDs on its exposed breadboards blink while status messages flicker to life, scrolling up the 52x24 character green monochrome display; then the lights dim, a projector screen lowers, and various set-top boxes have their inputs configured. Kids will spend hours "watching TV" just changing the channels and active devices while actually paying attention to the old Osborne 1 doing its duty. I consider it sort of like an 80's version of steam-punk -- my take on "cyber-punk". Sometimes I'll show the older kids how to manually command systems by making and breaking circuits with a paperclip on the breadboard to do IO. The resulting stream of "how"s and "why"s is fully expected; this setup was socially engineered to lead hapless inquisitors away from the mind-numbing TV and out to tinker with the brain-boosting electronics and robotics projects in the garage.

I have some replacement parts from its dead brothers and sisters, but it too will eventually bite the dust and be replaced with other hardware. I really miss parallel ports. Even kids can do IO by hand on the old interface instead of running everything through a more complex serialization protocol; building a USB interface just to get back bit-mapped parallel IO is just silly. Thus, old beige boxes and custom DOS programs are still my favourite for intros to software/hardware & robotics, even more so than single-board or embedded systems like the Raspberry Pi or the Arduino and its clunky expansion ports -- for want of a simple parallel interface... I mean, you can use a bit or byte pattern of a parallel interface as an "escape code" to signal a mode switch, and with a few transistors you can have as many "expansion cards" to program as you want. When I'm teaching how stuff works, I don't want things like this abstracted away and hidden behind proprietary hardware and software interfaces.
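The escape-code trick above is easy to sketch in software. Here's a toy demultiplexer (the reserved byte value and card numbering are my illustrative choices, not any real port's protocol):

```python
ESCAPE = 0xFF  # reserved byte pattern that signals a mode switch (illustrative)

def demux(stream):
    """Route a flat parallel-port byte stream to virtual 'expansion cards'.

    An ESCAPE byte means the next byte selects which card receives
    subsequent data -- the software equivalent of a few transistors
    latching a select line.
    """
    cards = {}    # card id -> bytes delivered to it
    current = 0   # active card until the next escape sequence
    it = iter(stream)
    for byte in it:
        if byte == ESCAPE:
            current = next(it)  # byte following the escape selects the card
            continue
        cards.setdefault(current, []).append(byte)
    return cards

# Card 0 receives two bytes, then an escape sequence switches to card 3.
routed = demux([0x01, 0x02, ESCAPE, 3, 0x0A])
assert routed == {0: [0x01, 0x02], 3: [0x0A]}
```

The cost of the trick is that the ESCAPE value can't appear as plain data; a real design would add a second escape sequence to stuff it through.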

Remember the three R's: Reduce, Reuse, and Recycle. Reusing old hardware should be attempted before recycling. Experiencing the magic blue smoke escaping from an old main board, ISA/PCI card, etc. is an important part of learning electronics; having to redo their work teaches folks to be more careful, even if the parts are otherwise "worthless junk". Making interesting and/or useful things out of a "Trash-80" is seen by youngsters as more impressive than using purpose-built devices designed to facilitate the project. If they make it past the Cyber-Junkyard Frankenstein stage, only then do they move up to working on more expensive single-board systems and full-featured robotics systems, bypassing the Raspberry Pi and Arduino stage altogether (and foisting some of my old junk into other unsuspecting tinkerers' garages).

The Osborne 1 is great for operating your whole home AV gear. Bugfixing custom hardware and Z80 instructions exercises one's memory and maintains neuro-plasticity -- It can even cause kids to favor educational programming instead of that obnoxious crap on TV nowadays.

5 days ago

Heartbleed Sparks 'Responsible' Disclosure Debate

VortexCortex One Cyberneticist's Ethics (188 comments)

Once again the evil of Information Disparity rears its ugly head. To maximize freedom and equality, entities must be able to decide and act by sensing the true state of the universe; thus knowledge should be propagated at maximum speed to all. Any rule to the contrary goes against the nature of the universe itself.

They who seek to manipulate the flow of information wield the oppression of enforced ignorance against others, whatever their motive for doing so. The delayed disclosure of this bug would not change the required course of action: the keys will need to be replaced anyway. We have no idea whether they were stolen or not. We don't know who else knew about this exploit. Responsible disclosure is essentially lying by omission to the world. That is evil, as it stems from the root of all evil: Information Disparity. The sooner one can patch one's systems, the better. I run my own servers. Responsible disclosure would allow others to become more aware than I am. Why should I trust them not to exploit me if I am their competitor or vocal opponent? No one should get to decide who their equals are.

Fools. Don't you see? Responsible disclosure is the first step down a dangerous path whereby freely sharing important information can be outlawed. The next step is legislation to penalize the propagators of "dangerous" information, whatever that means. A few steps later, "dangerous" software and algorithms will be outlawed -- for national security, of course. If you continue down this path, soon only certain certified and government-approved individuals will be granted license to craft certain kinds of software, and ultimately all computation and information propagation itself will be firmly controlled by the powerful and corrupt. For fear of them taking a mile, I would rather not give one inch. Folks are already in jail for changing a munged URL by accident and discovering security flaws. What idiot wants to live in a world where even such "security research" done offline is made illegal? That is where Responsible Disclosure attempts to take us.

Just as I would assume others innocent unless proven guilty of harm, to ensure freedom, even though it means some crimes will go unpunished: I accept that some information will make our lives harder, and that some data may even give the malicious a temporary unfair advantage over us, but the alternative is to allow even fewer potentially malicious actors an even greater unfair advantage over even more of us. I would rather know that my Windows box is vulnerable and possibly put a filter in my IDS than trust Microsoft to fix things, or excuse the NSA's purchasing of black-market exploits without disclosing them to its citizens. I would rather know that OpenSSL may leak my information and immediately recompile it without the heartbeat option than trust strangers to decide what's best for me.
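For reference, recompiling without the heartbeat option looks roughly like this (a sketch assuming you're building from the OpenSSL source tree; the `-DOPENSSL_NO_HEARTBEATS` define is the widely circulated Heartbleed stop-gap):

```shell
# Rebuild OpenSSL with the TLS heartbeat extension compiled out,
# removing the code path that Heartbleed exploits.
./config -DOPENSSL_NO_HEARTBEATS
make
make test
sudo make install
# Then restart every service linked against the old library.
```

Upgrading to a patched release is the real fix; this just closes the hole immediately while you wait on no one.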

There is no such thing as unique genius. Einstein, Feynman, and Hawking did not live in a vacuum; removed from society all their lives, they'd not have made their discoveries. Others invariably pick up from the same available starting points and solve the same problems. Without Edison we would still have electricity and the light bulb. Without Alexander Bell we would have had to wait one hour for the next telephone to enter the patent office. Whoever discovered this bug and came forward has no proof that others did not already know of its existence.

Just as the government fosters secrecy of patent applications and reserves the right to exclusive optioning of newly patented technology, if Google had been required to keep the exploit secret except from government agencies, we may never have found out about Heartbleed in the first place. Our ignorance enforced, we would have had no choice but to keep our systems vulnerable. Anyone who thinks hanging our heads in the noose of responsible disclosure is a good idea is a damned fool.

5 days ago

NASA Proposes "Water World" Theory For Origin of Life

VortexCortex Entropy can Increase or Decrease Locally (115 comments)

"Random processes"? Any randomly assembled amino acid randomly disassembles as well; even Miller proved that.

The randomly assembled amino acid does randomly disassemble as well, but that is not all it can do. An amino acid may stay the same, disassemble, or form a more complex molecule.

Here is a little demonstration of "randomly" assembling complexity in behavior. I have given each entity the ability to sense the left-rightness and ahead-behindness of 'energy' dots and of their nearest peer. They also get a sense of their energy relative to that peer. The inputs can affect two thrusters which operate like "tank treads". However, their minds are blank: they don't know what to do with the inputs or how they map to the outputs. The genetic program introduces random errors as it copies runs of a genome from one parent then the other, switching back and forth randomly. The selection pressure simply favors those with the most energy at the end of each generation by granting them a higher chance to breed. Use the up/down keys to change the sim speed, and click the entities to see a visualization of their simple neural network. The top-left two neurons sense nearest-food distance, the right two sense the nearest entity, and the middle-top is the relative energy difference of the nearest peer. Note that randomness is constantly introduced, and yet their behaviors do not revert to randomness or inaction; they converge on a better solution for finding energy in their environment.

There is no pre-programmed strategy for survival. Mutations occur randomly, and they are selected against, just as in nature. Given the same starting point, different runs/populations will emerge different behaviors for survival. Some may start spinning and steering incrementally towards the food; others may steer more efficiently after first just moving in a straighter path to cover the most ground (they have no visual or movement penalty for backwards, so backwards movement is 50% likely). As their n.net complexity grows, their behaviors will change. Movement will tend towards more efficient methods. Some populations may become more careful instead of faster; some employ a hybrid approach by racing forwards, then reversing and steering carefully after the energy/food is passed. Some entities will evolve avoidance of each other to conserve energy. Some populations will bump into each other to share energy among like-minded (genetically similar) peers. Some will even switch between these strategies depending on their own energy level.

Where do all these complex behaviors come from? I didn't program them; I didn't even program in that more complex behaviors should be more favorable than less complex ones, and yet they emerged naturally as a product of the environment due to the selection pressure upon it. Just because I can set the axon weights manually and program a behavior favorable for n.nets to solve the problem doesn't mean randomness can't yield solutions as well. Today we can watch evolution happen right on a computer, or in the laboratory. All of this complexity came from a simple simulation of 32 neurons arranged in a single hidden layer, with 5 simple scalar sensors, the minimal 2 movement outputs, and a single selection pressure. Each time you run the sim it produces different results, but all meeting the same ends: collect energy, reproduce. Just imagine what nature can do with its far more complex simulation and selection pressures... You don't have to imagine; you can look around and see for yourself.
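The whole loop fits in a page of code. Here's a stripped-down sketch of that kind of neuroevolution (I've kept the 5 inputs, a small hidden layer, and 2 thruster outputs; the population size, mutation rates, and toy fitness function are my illustrative choices, not the actual sim's):

```python
import random

IN, HID, OUT = 5, 4, 2  # 5 scalar senses, small hidden layer, 2 "tank tread" thrusters

def random_genome():
    # A blank mind: one flat list of random axon weights.
    return [random.uniform(-1, 1) for _ in range(IN * HID + HID * OUT)]

def act(genome, senses):
    # Single-hidden-layer net: senses -> hidden (ReLU) -> thruster outputs.
    w1, w2 = genome[:IN * HID], genome[IN * HID:]
    hidden = [max(0.0, sum(senses[i] * w1[i * HID + h] for i in range(IN)))
              for h in range(HID)]
    return [sum(hidden[h] * w2[h * OUT + o] for h in range(HID))
            for o in range(OUT)]

def breed(a, b):
    # Copy runs of the genome from one parent then the other, switching
    # randomly, with occasional random errors (mutation).
    child, src = [], a
    for i in range(len(a)):
        if random.random() < 0.1:
            src = b if src is a else a
        gene = src[i]
        if random.random() < 0.05:
            gene += random.gauss(0, 0.1)
        child.append(gene)
    return child

def evolve(fitness, pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]  # most "energy" -> higher chance to breed
        pop = elite + [breed(random.choice(elite), random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Toy selection pressure: thruster 0 should fire hard when food is dead ahead.
fitness = lambda g: act(g, [1.0, 0.0, 0.0, 0.0, 0.0])[0]
best = evolve(fitness)
```

No behavior is programmed in; only the scoring is. Whatever weight pattern scores best under the pressure is what survives, which is the entire point of the demonstration.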

In other, more complex simulations I allow the structure of the n.nets and the form of the sensors to be randomly introduced, with selection pressure applied. In larger simulations I allow the breeding and death of generations to occur continuously across wider areas, and speciation will occur. Entities will develop specialized adaptations for a given problem space of the environment. I have created simulations where the op-code program instructions themselves are randomized, and seen program replication emerge. If there is replication and entropy (errors added), competition will begin among the entities for resources in the environment.

I have created other simulations for the evolution of chemical life in a simplified environment with particles that attract or repel and have various bonding properties. These take even more processor cycles to run, but they demonstrate the emergence of complex behaviors too. First, linking-up of chains of atoms occurs. The entropy introduced and other parameters determine how long the chains can grow. Just as the op-code VM universe requires certain types of opcodes to produce life, certain universes of chemical properties are not conducive to life either. The ones that are will spontaneously emerge life-like processes.

The first atom-joining interactions will produce small molecules, thus increasing the complexity of the universe -- this is the meaning of life: it is what life is and does. In my opinion, life is not binary but a scalar quality which occurs at many different levels; the degree to which something is alive is denoted by the complexity of the interaction, IMO. Out of the millions of simplified chemical universes, some will be able to produce molecules, and some can yield chains of molecules. The first chain-producing catalytic molecule, or "protein", will begin to fill the sim with long chains of atoms, and then become extinct as no more free atoms are available, or as entropy destroys the chain-making "protein". Some universes that can produce atomic interactions cannot produce life. I call this "universal crystallization": without enough entropy in the universe you don't get complex life, only self-assembling crystals. With enough entropy to break the chains down over time, but not so much that it limits the chain lengths too severely, chain-making interactions can restart and die out many times. Each micro-genesis tends to occur with greater frequency given the more complex environment its predecessors left behind.

Suddenly an evolutionary leap will emerge: simple pattern matching. For no other reason than it being one out of a sea of random outcomes, one of the spontaneous chain-making catalysts will produce TWO chains instead of one. Often this happens because entropy attaches two chain-making catalytic "proteins" together. Because of the propagation of attraction/repulsion and other properties through the "protein" molecule, the atoms added to one chain have an immediate effect on the atom or molecule that can be added to the other chain. An interlock is formed, and the complexity of the simulated universe essentially doubles in a small period of time. Sometimes three chains can be produced as well; this may even yield ladder-like chains, which are very durable, though frequently the ladders do not emerge and are not absolutely required for the next phase of complexity. During the pattern-matching phase, copying can occur: a chain of molecules may enter one side of a pattern-forming "protein" and produce the mate chain. Depending on the chemistry, that mate chain may require one or more additional mate pairings before reproduction of the initial chain. Thus the first copy of information is born, and it is key to future increases in complexity. Not all chains can make copies; if any input on one side has two or more equally likely pairs, it will almost certainly prevent exact copies. Thus molecular pairs, rather than single atoms, are more likely to form life's chemical op-code. Depending randomly on the time it takes for the next evolutionary jump to happen, it may occur via manufacture of simple or very complex molecular chains.
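That copy step is just template complementarity. A toy version (I'm borrowing the familiar four-letter alphabet as an illustrative pairing table; the sim's actual "molecular pairs" are arbitrary):

```python
# Each unit has exactly ONE mate, so copying is exact.
# An ambiguous pairing table (two equally likely mates) would destroy fidelity.
PAIRS = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def copy_chain(chain):
    """Feed a chain through the pattern matcher: emit its mate chain."""
    return ''.join(PAIRS[unit] for unit in chain)

template = 'ATGGCA'
mate = copy_chain(template)          # the intermediary form
assert copy_chain(mate) == template  # the copy of the copy reproduces the original
```

Note that one pass gives you the mate, not the original -- which is exactly why the sim's chains need one or more additional pairings before the initial chain reappears.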

Some catalytic interactions have an interesting property: instead of being formed from a chance collection of complex molecules, the catalyst is itself a chain. Beginning as a molecular chain, it may be able to exist in two states: relaxed, or kinked up. Interaction with a few common atoms or molecules at one or more "active" sites can cause the somewhat relaxed chain to transform, in a cascade of kinks, into the catalytic protein shape whereby it can manufacture molecular chains. The pattern of the folding is programmed into the molecular chain itself; charge interaction propagates along it such that stages of the folding action occur in a predictable way (this is still a mysterious mechanism we're trying to solve in real-world biology). Shortly after the emergence of the first such transforming molecular chain, an explosion of evolutionary advances occurs. When the relaxed form of a dual chain-maker kinking "protein" is fed into another dual chain-maker, its intermediary form is output. Rarely is this mate-pair form also a transforming kinking protein. It may take one or more additional passes through a duplex pattern matcher to yield the first copy of a copying machine, or it may take a secondary kinking protein that performs the reverse process. "Protein" synthesis is a huge evolutionary leap because immediately thereafter the copy machines reproduce exponentially and begin competing by reproducing the array of slowly mutating intermediary forms.

Mutations to the copy machine or its intermediary form(s) can cause new, varied forms of itself to be created. Most of the mutations are fatal; some do no harm, merely adding expense via inert additions; some additions are useful. This is where I see the evolutionary leap due to competition: armaments and defenses. The inert additions can serve as a shield, preventing vulnerable "protein" machinery from breaking down by keeping a stray atom or molecule from joining or gumming up the works. Some copy machines will evolve weapons: long (expensive) tails that detach on contact to clog the gears of other copiers, or short barbed kinks with active elements at the tips that break away and attach to competitors. Some may produce reactive molecules that they are immune to -- essentially an acid to digest others with. Soon activity is everywhere, and the sim fills with ever more complex interactions. Thus the complexity of the copier universe grows by amazing evolutionary leaps and bounds.

New forms of protein folding emerge which may yield reversible kinking instead of one-way transformations. Sometimes folding is lost, and life can go extinct while relying on existing catalytic molecules (given enough time, folding may form again). Very rarely, instead of long chains, self-assembling catalysts are formed which manufacture the various smaller parts that then self-assemble. However, I have never seen self-assembly of this kind yield much more complex things without resorting to duplicating molecular chains. The "genetic program" is a powerful evolutionary advantage which seems to be almost a mandatory necessity for complex life. If the chemistry is right, sometimes naturally occurring or previously mass-produced mated molecule pairs will form chains that begin copying by zipping and unzipping free amino acids, given some slight oscillating energy to break apart the chain pairs.

Without any tweaking of parameters, I'll sit back and watch as the simulated universe emerges more and more complexity through competition. Some designs seem naturally more favorable than others: having only a single intermediary stage instead of two or more is the most common. I think this may be why RNA replicates by a pair strand, and why DNA has a double-ladder chain to ensure the op-code interlock functions and to yield structural stability. Program hoops seem very advantageous -- a dual chain-producing protein may be fed its intermediary chain linked with its own unkinked self. The protein then races along the hoop, feverishly outputting an unkinked chain copier and a long curved intermediary copy, which can self-assemble into more hoops and kink up to do more copying. The hoop is advantageous for copying, since a new copy can begin immediately instead of waiting for a chance strand to happen by (which may not even be a competitor's genome). I see the emergence of start and stop signals among the chains, so that long intermediary chains or hoops may produce several different molecules of varying complexities.

Many symbioses occur. For the first time, a chain destroyer is advantageous to have in one's molecular program, to split up chains and/or atomize molecules for re-use. It's optimal for some copiers to use the shorter chains lying around rather than create everything from individual atoms, so they may survive only as long as the smaller, simpler/cheaper chain producers do too. Sometimes the intermediary forms of two or more complementary copiers will join, and thus carry the information for copying all the essential copiers and assemblers of an environment. An energy store-and-release mechanism is heralded by the local increase and decrease in entropy in the making and unmaking of molecular chains, corresponding to heating and cooling of the environment, which can act as a switch for kinking or relaxing and drive even more complex interactions.

Something like speciation occurs when various areas of the universe are separated by seas of caustic agents or walls of thick chain soup, leaving ecosystems unable to interact. Something like viruses occurs when one subsection of the environment, replicating various chains, is infected by a drifting intermediary-form chain that produces incompatible copiers and/or catalysts that are very destructive and take over the ecosystem. I haven't got the CPU power or time available to run these simplified simulations for billions of years, but there is no doubt in my mind that if I did, some would produce more complex creatures by continuing the process of encoding ever better methods of survival within their self-replicating copies.

I have seen simulated life emerge in my computers. For the life of me, I can't figure out how it is any different from organic life. Both require energy and an environment conducive to their survival in order to exist. Thus my definition of life includes cybernetic beings, and my ethics revolve around preserving and increasing the complexity of the universe. I know of more efficient ways to produce intelligence than waiting billions of years in billions of universes with billions of different parameters for emergent intelligences to appear: I can apply the tools of nature directly to the complexity of information itself within the problem spaces I need solved. But just because I can intelligently design a solution doesn't mean that nature could not do the same with her countless dice and enormous trial-and-error periods.

Emergence of life in water would seem likely given the chemical properties of water and its usefulness both as a medium of interaction and as a source of energy. Water is one of the most abundant molecules (made of hydrogen and oxygen, both very abundant atoms). For the same reason that certain op-codes are required for self-copying programs to form, carbon is a likely atomic building block for life. We know how to create hydrogen atoms from pure energy using very powerful lasers. We know hydrogen fuses into helium in our sun. We may not have been there at the big bang to watch it happen, but all evidence points to its occurrence, to the subsequent hydrogen production, and to the supernovae that increased the chemical complexity of the universe to include all the atoms we find here on Earth, and of which life is made. We've seen self-assembling lipids and amino acids, and it's not a very large leap to consider that the latter may take refuge in the former, given enough pulses of the local entropy increase-and-decrease cycle and a bit of selection pressure.
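The "certain op-codes are required for self-copying programs" point can be made concrete with a quine: a program whose output is its own source. This two-line Python version is a standard construction (not tied to the simulations above); it holds a template of itself and prints the template filled in with its own repr:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly those two lines. The point generalizes: any self-replicator needs some such mechanism for quoting and re-emitting a description of itself, which is why instruction sets (and chemistries) lacking that capability never produce copiers.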

I am not a god to my cybernetic creations. I am merely a being in a higher reality to them. I see no evidence that indicates a god is required for life to have formed. Complexity emerges as entropy decreases locally, and this universe is expanding. However, even if we met miraculous beings tomorrow with technology far beyond our own, and they had powers that seemed as magic, they would be merely aliens, not gods. Even if the universe were a simulation and the administrator logged in with full god-like command of my reality, I would not worship them as a god: Should Neo worship the Machines of the Matrix? No, worshiping aliens for having more technology is cargo cultism. If reality itself were a thinking entity, we would call it a massive alien intelligence and study it with cybernetics, not worship it as a god. The god of a cargo cult does exist, but it is not a god.

The philosophical concept of a much higher intelligence should not be conflated with the term "god". To do so lends the reverence you hold for ancient traditions and their folkloric figures to powerful alien minds. You are espousing enslavement to Stargate Aliens by saying a mind-boggling god created life. Science has made magic into technology. The term "god" is deprecated, and can no longer apply. God is dead. I will not worship aliens, even if they do exist.

These are facts of thermodynamics, verifiable through the mathematics of information theory: Complexity can emerge from simpler states; complexity can beget more or less complexity, but complexity is not first required to produce complexity. The lack of all complexity would be a hot dense singularity.

5 days ago

MIT Designs Tsunami Proof Floating Nuclear Reactor

VortexCortex Well, duh, anyone with a sim can see that. (217 comments)

Everything I need to know about energy logistics I learned from Sim City 2000.

You put the plants / reactors away from the city, out in the water, so that pollution doesn't bother folks and if there's an explosion, nothing else catches on fire. The cost of maintaining the power lines is far less than additional rebuilding costs after a disaster strikes and the plant blows. I guess next they'll discover it's fucking egregiously foolish to zone schools and residential next to industrial plants. In this case, they didn't even need a sim, they could just read a history book.

about a week ago

Linux Voice is a New Magazine for Linux Users — On Paper (Video)

VortexCortex Re:Marketing geniuses (69 comments)

Paper has infinite resolution

No it doesn't. Ever heard of atoms?

Processor speed has nothing to do with resolution (Planck lengths). Atoms? No, you need to ARM yourself for the future of Linux.

about a week ago

Click Like? You May Have Given Up the Right To Sue

VortexCortex Re:Possibly Worse Than That (216 comments)

Note: When removing my name I changed the above slightly from my own agreement. Change HTPP to HTTP, the former is a completely different protocol for browsing porn...

about a week ago

Click Like? You May Have Given Up the Right To Sue

VortexCortex Re:Possibly Worse Than That (216 comments)

Little did they know that there is a EULA that comes along with my purchase. If they sell me a product, they are agreeing to a long list of provisions which they are free to look up on my Web site.

I did that for HTTP. You'll find our binding agreement in your server logs. In the HTTP user agent header:

(By continuing to transmit information via this TCP connection beyond these HTPP headers you and the business you act in behalf of [hereafter, "you"] agree to grant the user of this connection [hereafter, "me" or "I"] an unlimited, world wide and royalty free copyright for the use and redistribution of said information for any purpose including but not limited to satire or public display, and agree that any portion of an agreement concerning waiving of my legal rights made via this connection is null and void including but not limited to agreements concerning arbitration; By accepting these terms you also acknowledge and agree that these terms supersede any further agreement you or I may enter into via this connection, and that the partial voiding of agreements will be accepted as a contractual exception regardless of statements to the contrary in further terms agreed to by you or I via this connection. If you do not agree to the terms of using this connection you must terminate the connection immediately. If you do not or can not agree to these terms you do not have permission to continue sending information to me via this connection, and continuing your transmission will be in violation of the Computer Fraud and Abuse Act.)

You can add such a clause simply by using any of the various User-Agent switchers for your favorite browser.
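A script can also set such a User-Agent itself. A minimal sketch using Python's standard urllib; the clause text here is a placeholder, and `example.com` stands in for whatever site you'd actually visit:

```python
import urllib.request

# Placeholder clause; the real one would carry the full terms quoted above.
clause = "(By continuing to transmit beyond these headers you agree ...)"

req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": "Mozilla/5.0 " + clause},
)

# The header travels with every request made from this object, so servers
# typically end up logging the clause verbatim in their access logs.
print(req.get_header("User-agent"))
```

Whether a clause smuggled into a request header is actually binding on anyone is, of course, a question for a lawyer, not for urllib.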

about a week ago

