
Slashdot: News for Nerds


Comments


Watch Dogs Released, DRM Troubles

VortexCortex Re:Ubisoft and PCs... (123 comments)

Sadly, that initial month of sales is the most important. Wait a few months and it's about the same as not buying it at all -- the purchase disappears into the statistical background noise of all the other back-catalog titles.

Unfortunately, everything in gamedev is designed to squeeze the most effort out of the least advantaged: from chips to memory to humans. The publisher makes nothing; they add no real value to the product. The work to make the product is already done, so they leverage artificial scarcity to recoup their losses -- killing studios if they overestimated ROI. Quality isn't their concern, and won't be until players stop rushing to buy shit and the artists / devs / testers are treated well.

The silly thing is: The game developers could just work directly for the players. They could say: Hey, we want to make this game, it'll cost $PRICE. They can negotiate payment up front, do the work, get paid, and then "give the game away" to the players (since the work has already been paid for) just like they do working under a publisher. No need for DRM because you have an unlimited monopoly over your work before you do it. Can't pirate what's not created, so don't create what's not paid for. This has the interesting advantage that you actually waste less time making shit that no one will pay for. It's the same money as working under a publisher, just without the publisher driving up the cost. Games are cheaper to buy for the consumers, and so you can charge a bit more for the development to meet in the middle. Alleviate some crunch.

The only problem is: Kickstarter. This idea is fucked. Instead of asking for the full amount up front that you actually need, you ask the public to pay for a small portion of the funds, and then waste a chunk of the money on giving them "perks". Does your mechanic throw in perks for working on your car? Why the fuck should the over-extended, underfunded game devs who are crunching like mad for you have to throw in perks for working on games? Also, design is a process, so being locked into following Kickstarter promises would limit the feasibility of completing a game. Doom would have remained single-player with a character select screen and RPG elements instead of fast-paced, frenetic deathmatches. Quake would have been an overly ambitious MMO that fell short of budget and died. Halo would have remained a real-time strategy game instead of becoming the killer app for a console. Portal 2 wouldn't have had GLaDOS, Chell, or portal guns, and would have had to be canceled since F-Stop and movable terrain didn't work as a sequel. Tetris would never even have been a game: it would have been an AI sim for packing shipping crates.

Clearly, markets are changing. There needs to be some form of middle ground. You don't want DRM? Fund the damn development. Don't want to take that big of a risk in case the game fails? Do it in installments, and have devs show progress. Allow features to get cut and re-designed, because that's how games are made: none spring fully formed from the design documents; those just provide a hopeful direction to proceed in, but there's no telling where you'll end up, especially if the requirement is to be "entertaining".

The publishers are spending several times more on advertising than on game development! That $60 game? You could pay $30 directly to the devs for it to be made, and they'd STILL have MORE budget than what the DRM-happy publishers are paying out for the game to actually be made.

Any economist not hanging their head in shame at the current state of the information industries is a fool, and the players buying $60 games with DRM on them, instead of having free copies of every game ever made for everyone by simply having the community fund the gamedev, are just as ridiculous. Can't really blame the latter though: gamedevs still choose to work for publishers, and all-or-nothing "crowdfunding" shite like Kickstarter is only partial funding with a bunch of additional constraints and burdens -- thus when the games are made, the devs have to try to leverage artificial scarcity, DRM, etc. to monetize the work, since they're only able to ask for part of the funds.

You don't like DRM? IT'S YOUR OWN DAMN FAULT IT EXISTS. Sorry, it is. We have the technology to obliterate the need for such artificial scarcity. We can get paid to do work once, like every other labor market. Folks like me want to end piracy by applying the FLOSS model to gamedev, but we're ahead of our time, so we can't quit our day jobs yet. Someday it'll happen though. Market forces tend towards efficiency.

about 2 months ago

Watch Dogs Released, DRM Troubles

VortexCortex Give the AI folks more resources, FFS. (123 comments)

As dazzling as the game can look, this Chicago feels like a place you travel through rather than a world you inhabit. Pedestrians gasp and gawp at car crashes, but exhibit no real life.

That's because they only give us AI devs 1% or 2% of the budget. If you stopped harping on about how amazing the graphics are and realized that games are interactive art, and that it's the "rules and logic" (AI) that make a game happen, then we could sacrifice just a tiny bit of those graphics and physics to give you a vastly better gaming experience.

Until folks start talking about the "Immersive Environment" and including the AI, your games will feel as wooden and false as ever. Give AI More Resources!

about 2 months ago

German Court Rules That You Can't Keep Compromising Photos After a Break-Up

VortexCortex Re:Ridiculous (334 comments)

The only possible reasons to keep adult photo's after a relationship ends are unethical and immoral. Either the person wants to use them for revenge/smear purposes, or they are unable to cope with the termination of the relationship and need something to cling on to, or perhaps they have a porn collection that they want to supplement with an old girlfriend for masturbatory purposes. In all cases, that is unacceptable behavior and symptomatic of numerous possible mental disorders.

So says you. Seems you failed philosophy 101. I think all that's healthy as fuck, and part of the human experience. Who the fuck are you to tell me what I can and can't do in the privacy of my own home?

If a person dwells on a mental image of an ex, we would consider them sociopaths and dangerous.

You're right off your fucking rocker, mate. You're assuming a fuck-load of intent based on zero action. Get bent, you fool.

If they went around telling people what their ex looked like naked, we would think the same.

Are you even aware that you're trying to label every guy or girl who has ever pined for lost love or hung out with their friends as "sociopaths" and "dangerous"?

Why is a physical picture different than a mental picture? That's a rhetorical question, and the answer is [I'm a fucking idiot, and I don't know anything about reality so I ask stupid questions and make huge logical leaps]

The physical picture isn't much different than a mental picture, except I remember damn near everything that's ever happened to me all the way back to around my 3rd birthday, so my memories have proven more permanent than lots of photos -- many of which burned in a house fire that destroyed family albums, along with some photos of friends and of my girlfriend, who have all since passed away.

Fuck you for saying the few pics of them I still have left are signs of sociopathy or derangement. You may consider memory loss healthy, but I consider it a malfunction. Practice what you preach, idiot:

To claim that a relationship is wrong, or that two consenting adults planning a long term relationship can't do things in the bedroom is social retardation at it's finest.

To claim that being single is wrong, or that a consenting adult who's not harming anyone in private can't do things in their bedroom, is gourmet batshit Orwellian insanity. I'm glad I'm not a German and don't live in the EU (where they've enacted the Memory Hole law) so I don't have to bend over and take the police state right up the place where you should go fuck yourself.

You might be an idiot who gave your ex some nudes and regretted it, so you like this ruling. However, what you don't realize is that this is just inching you along towards a bad place you don't want to be. Hey, have you seen what we can do with cybernetics and neurology now? FALSE MEMORIES. Yep. We can also identify where a memory is in the brain; we can ERASE MEMORIES. How would you like folks to leverage their "right to be forgotten" inside your skull? Oh, you'd love it; I'm sure you're just wetting yourself thinking of all the fun you can have doing this to folks you now hate. See, you're the one that's not demonstrating the ability to empathize with normal human behavior. YOU'RE THE PSYCHO.

Eventually you'll get enough laws in place that you can't function without breaking them. It'll be no big deal at first because selective enforcement only holds the "really bad" offenders to the rules. You know, the folks who do this shit to celebrities and officials. Then one day you'll realize that these laws aren't being applied equally. Then you'll realize the definition of what's acceptable has changed and so have the penalties when you weren't looking. Then you'll say or do something some pompous powerful plutocrat doesn't like, and you'll be taking your turn in the slammer. Enjoy your distended anus in the police state prison.

You'd think Germany of all places would have learned their lesson about this shit, but I guess not. Next time you fuckers decide to shove this totalitarian bullshit on your surrounding neighbors, the only thing your country will be good for is studying new breeds of radiation resistant cockroaches.

+5 Insightful

I smell some Social Justice Feminazi bullshit all over these mods too. Fuck them just as much, if not more so. Psycho bitches.

about 2 months ago

Ask Slashdot: Tech Customers Forced Into Supporting Each Other?

VortexCortex Re:Google (253 comments)

Every detail about one's life doesn't come cheap, bucko.

If my info was so worthless they wouldn't even have all those billions.

They can either pay me a cut of the ad revenue they make off me, or give me support. Otherwise, I don't fucking use them, and none of my friends or family will either, because I'm who they ask about stuff like that, and I'm not above telling them half-truths that paint Google as more evil than Microsoft.

They'd better wise up: there's no such thing as too big to fail. There is too big to bail out, though, and recent political events are just the ammunition competitors need. Compete on customer support, or it will bite you in the ass. Only monopolies can play the "fuck off, no support for you" game. Just wait and see what happens with AT&T (again), or Comcast. Google's ass is ripe for the chopping block. I despise Social Justice bullshit, but as a tool-using creature I'm sure I can paint Google as the most misogynistic institution ever, based purely on facts. Thanks to the Squeaky Wheel and Degrees of Bacon, the critical mass of users it takes to destroy your business is a very small percent.

Keep pissing off users, and they'll pay with their ass.

about a month ago

DARPA Unveils Hack-Resistant Drone

VortexCortex Re:To quote the bard (107 comments)

While I agree, and in no way trust the words of defense contractors, this is a common sentiment that's usually applied a bit too broadly. One must realize that all security is security through obscurity. Each bit of obscurity increases the effective security exponentially. Yes, it may very well be that not having access to the cipher algorithms in use only provides a few bits of security, since they're likely using one of the existing cipher systems; however, those are a few bits of security that do exist if not disclosed. Now (this would be overkill), let's say they are chaining multiple ciphers together, say, Plaintext -> RC4 -> AES -> Ciphertext, and let's say they repeat that loop N times. Apart from the ciphers themselves, the number and order of ciphers and the iterations of the loop add a few more bits of security through obscurity. Each stage of unknown cipher adds a few more bits of security in the selection of that cipher.

Indeed, this is how a cipher itself is built up from the various cryptographic primitives such as mix-rows, S-boxes, XORs, look-up tables, processions along a curve, block-chaining strategies, and other reversible input-to-output mappings. I've just abstracted the process of constructing a cipher and described it using ciphers themselves, in a way that can increase security exponentially even if you do know the details of the ciphers; but without knowing the details we can consider the system that much more secure. If the machine falls into enemy hands or details are leaked then our "element of surprise" bonus to the security is lost, but while it is not divulged it is demonstrably more secure -- the same is true of cipher keys themselves: knowing a little bit of the key doesn't break the security, but it weakens it; so consider these secret bits part of the key. All security is security by obscurity.

Now, purely hypothetically, let's say I built such a system that uses dual key expansions to derive both the key to use for the ciphers and a seed for a good random number generator, which is then used to select which ciphers to chain together and in what order (perhaps the number of iterations is computed to provide a fixed CPU load). Now, since the cryptographic primitives and implementations themselves have all been hacked on and accepted within the security community, this method of ciphering would also be considered at least as secure without really needing to vet the process. At worst it's exponentially more secure than the currently accepted ciphers widely used today.
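
Purely as an illustration of that shape -- my own toy sketch, not anything from the article or DARPA, with trivially weak XOR/add/rotate stages standing in for real vetted primitives and std::mt19937 standing in for a proper KDF and CSPRNG -- one expansion of the shared secret keys the stages, while a second seeds the generator that picks the stage order and round count, i.e. the part an outsider wouldn't know:

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    // Toy "cipher" stages: each is a reversible, keyed byte transform.
    // A real system would use vetted primitives (AES, ChaCha20, etc.); these are stand-ins.
    static void xor_stage(std::vector<uint8_t>& buf, uint8_t k) { for (auto& b : buf) b ^= k; }
    static void add_stage(std::vector<uint8_t>& buf, uint8_t k) { for (auto& b : buf) b = static_cast<uint8_t>(b + k); }
    static void rotl_stage(std::vector<uint8_t>& buf, uint8_t k) {
        unsigned r = k & 7u;
        for (auto& b : buf) b = static_cast<uint8_t>((b << r) | (b >> (8u - r)));
    }

    int main() {
        std::string secret = "shared secret";                    // the actual key material
        std::vector<uint8_t> data = {'h', 'e', 'l', 'l', 'o'};

        // "Dual key expansion": one stream keys the stages; a second seeds the generator
        // that decides WHICH stages run, in WHAT order, and for HOW many rounds.
        std::seed_seq expansion(secret.begin(), secret.end());
        std::mt19937 key_stream(expansion);                      // stand-in for a real KDF
        std::mt19937 order_stream(key_stream());                 // stand-in for a real CSPRNG

        std::vector<int> order = {0, 1, 2};
        std::shuffle(order.begin(), order.end(), order_stream);  // secret stage order
        int rounds = 1 + static_cast<int>(order_stream() % 4);   // secret round count

        for (int r = 0; r < rounds; ++r) {
            for (int s : order) {
                uint8_t k = static_cast<uint8_t>(key_stream());
                if (s == 0)      xor_stage(data, k);
                else if (s == 1) add_stage(data, k);
                else             rotl_stage(data, k);
            }
        }
        std::printf("first ciphertext byte: %02x\n", static_cast<unsigned>(data[0]));
        // Decryption replays the same expansions and applies the inverse stages in reverse order.
        return 0;
    }

Whether those extra obscurity bits are worth much is exactly the argument above; the sketch only shows where they would live -- in the stage selection, not in the primitives themselves.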

Do I really need to have everyone hack on this system to assure you it's safe? No, not really. I could reveal the details to some trusted 3rd party, maybe call him Bruce Security, or BS for short. Then BS can be sworn to secrecy, and without revealing the details publicly or hammering on it, BS could tell you: "Yes, this system is very secure, even more secure than any crypto system currently in use in the industry." And he'd be right, even if such claims on the surface seem highly unlikely due to your own assumptions.

I make a similar argument about practical vs. worst-case time (big O notation) when selecting algorithms. For example: red-black trees were chosen for C++'s map implementation. It was the "academically" and "provably correct" choice not to include hash maps in the implementation instead, or in addition. Practical folks said: FUCK! I need a damn hash table, because its practical key-lookup case runs in constant time, and I need that speed (one of the main reasons folks choose a compiled language). So, everyone went off and made their own hash-table implementations for their maps; some folks even dragged out their existing C implementations and used them. Later, since everyone was demanding the standards board pull its head from its ass and give us what we want, they finally added a hash-table implementation in C++11. However, now we have a bunch of existing codebases using non-standard hash-map implementations which will remain in place (because if it's not broke, don't fix it), and now we have exponentially more CPU cycles compared to decades past, so the inclusion of hash-table based maps is actually a bit moot at this point. So, the "academically provably correct" decision was actually detrimental in practical terms. Had they put hash tables in the STL long ago, we could have avoided a needless chunk of legacy maintenance headache.
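
For anyone who hasn't bumped into the two containers being argued about, a quick sketch of the distinction (the key count and micro-benchmark are arbitrary choices of mine; the point is just ordered tree vs. hash table, the latter arriving as std::unordered_map in C++11):

    #include <chrono>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <unordered_map>

    // Time n lookups against any map-like container that supports .at().
    template <typename Map>
    static void time_lookups(const char* name, const Map& m, int n) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (int i = 0; i < n; ++i)
            sum += m.at("key" + std::to_string(i));
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("%-28s %8lld us (checksum %lld)\n", name, us, sum);
    }

    int main() {
        const int n = 100000;
        std::map<std::string, int> ordered;          // red-black tree: O(log n) lookups, sorted iteration
        std::unordered_map<std::string, int> hashed; // C++11 hash table: average O(1) lookups, no ordering

        for (int i = 0; i < n; ++i) {
            std::string k = "key" + std::to_string(i);
            ordered[k] = i;
            hashed[k] = i;
        }

        time_lookups("std::map (red-black tree):", ordered, n);
        time_lookups("std::unordered_map (hash):", hashed, n);
        // Trade-off: the tree guarantees O(log n) worst case and ordered traversal;
        // the hash table is usually faster per lookup but can degrade to O(n) in the worst case.
        return 0;
    }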

In other words, the rule of thumb is: rarely does any rule govern all thumbs. Common wisdom has typically been foolishly dumbed down. The wise avoid speaking in absolutist terms; there are almost always exceptions. While it's true that showing off the code would allow us to vet whether or not the code is secure, we're likely to miss at least some bugs anyway. It would be equally reassuring to have it vetted by a 3rd party and to listen to their B.S. saying it's secure... Time is a good test too, in practice. It would be prudent to proceed with caution; however, note that the current drones aren't very secure. You can crash them with a jammer and tune into their video feed with your old-school TV set, so any obscurity at all -- even XORing everything with a single constant byte value -- would be better than the security we've currently got.

Note, they'd be correct in saying even just a constant-byte value XOR was "hack resistant" in "academically provably correct" terms, which is why one should typically expect BS.

about 2 months ago

DARPA Unveils Hack-Resistant Drone

VortexCortex Re:This is just silly (107 comments)

Well, I think this is a bit different. Such comments may be apt for other offerings, but this uses industrial-strength, military-grade, antibacterial, hypoallergenic drone security best practices.

about 2 months ago

DARPA Unveils Hack-Resistant Drone

VortexCortex Re:"mathematically proven" (107 comments)

"To determine who really rules, all you hafta do is ask: Who am I not allowed to misquote?"
- Voltaire

about 2 months ago

Facebook Refuses To Share Employee Race and Gender Data

VortexCortex Re:Stupid is as stupid does (250 comments)

White Males make up 77% of the USA. Is any company with less than this percentage racist? Affirmative Action: Utterly bogus.

about 2 months ago

This Is Your Brain While Videogaming Stoned

VortexCortex Re:Gamers? (168 comments)

Gamers? What about the programmers? They can't be straight.

Portrait of J. Random Hacker: Ceremonial Chemicals

I have often found that when any problem is to be solved, be it creating universes from singularities, forming life from atoms, building solutions from syntax, etc., the process benefits from an entropy gradient: energizing then expanding, heating then chilling, randomizing design patterns then benchmarking, changing strategies and sorting what works. Any who think that drugs are inherently evil and detrimental are arguing against the nature of the universe itself.

Sometimes considering every option methodically gives insight, but that is not the only way; that is not nature's way. Sometimes the entropy added is natural, sometimes deliberate. Sometimes it's induced by the disjoint dreams of sleep. Humans are tool-using creatures, and with moderation of dosage they may even use drugs as tools. A recreational chemical may give a different perspective, heighten some inherent ability, dull some pain or inhibition, or mix up the approaches to problems. A little entropy can be a good thing in a self-corrective system. Without chaos there would be no order: there would be no life, only crystals; no mutation, only stagnation; no adaptation, only the vulnerability of the monoculture; no new discoveries, only existing knowledge; no new innovations, only the dark ages.

Sometimes the temporary detrimental effects of being mixed up inside are worth the resultant order that settles out from the chaos. Even the ancients knew of catharsis. Their trial by fire is yet another stress then relaxation.

about 2 months ago

Goodbye, Ctrl-S

VortexCortex Re:IDE autocommit? (521 comments)

git checkout -b daily-grind
# The IDE auto-commits to daily-grind while I'm working on code.
# Time to commit to the public repo: squash all those auto-commits into one.
git rebase -i my-working-branch
git checkout my-working-branch
git merge daily-grind
git push

Now my working code has been pushed into a repository that's not got automated stuff, and from there I issue a pull request or perhaps push it over SSH to a more centralized server. I could do that from the automated repo, on bigger projects to avoid multiple copies, but on smaller repositories I like the extra layer of oops protection.

You see, branches in Git are easy and cheap: they're not massive checkouts of a repository, they're just pointers to places in time referencing the common history. That means you can make lots of commits and actually USE your version control locally rather than be a slave to it -- afraid to commit unless you're absolutely positive you're ready. So I create multiple new branches all the time, every day, even just to do some experimental thing I might not want to commit; if things don't work out I just drop that branch and carry on. Git is my auto-save, so that I have unlimited undo.

Say you're working on a commit for hours or days and you haven't committed it yet because you're avoiding "thrashing the repository" by creating your own new branch. Hard drive fails. Now you've got to redo that work. Not me. I've got multiple drives for one, and for two, a group staging server has a remote backup that's been pushed to every few minutes if there's been a change, so at most I've only lost a few minutes of work.

Doing this on someone else's dime? Sure, who cares, you get paid by the hour. On my time? Nah, "lost data" isn't a situation that I have to risk so I don't.

about 2 months ago

US Wireless Carriers Shifting To Voice Over LTE

VortexCortex Title II, when? (126 comments)

Data, communications, all synonymous, eh? I mean, HTTP really is a back and forth exchange.

Yep, reclassifying all Internet services as Title II makes so much sense.

about 2 months ago

Dump World's Nuclear Waste In Australia, Says Ex-PM Hawke

VortexCortex Re:Only safe place... (213 comments)

It would be better off on Mars, or in orbit, or in a salt dome, or in a facility in a salt lake in the middle of nowhere -- like, say, oh, Australia. Keeping it on Earth means that once the fossil-fuel scaremongering leveraging Fukushima dies down and we build better reactors, we can just extract its energy. Hell, China might actually buy it. Isn't Billy G. building a traveling wave reactor or a molten salt reactor there? If anyone knows how to handle hazardous products it's Microsoft CEOs...

On Mars it could eliminate costly mining operations and be used in a power plant as well, or in RTGs on rovers, etc. If we don't make it to Mars, then it's no different than sending it to the sun, aside from the expense of soft-landing it. Blasting it into the sun is just as expensive as parking it at a Lagrange point -- which is expensive, but at least it wouldn't be lost, and it's outside the gravity well already. Nuclear material isn't outlawed in space; lots of things run on it up there. You don't really have to get that far away from radioactive waste before its emanations become indistinguishable from standard background radiation. It's radioactive waste, but it's not "Red Matter" or some sci-fi shit.

My prime concern would just be keeping it out of the hands of thugs. Some armed guards posted along a perimeter far enough from a concrete bunker to be safe, maybe some cameras and security guards and motion detection AI to keep an eye on everyone. Hell, they could see anyone coming from miles away out on the salt flats, no other life to speak of on the flats either, except for the odd race car enthusiast, but they're fairly harmless if kept at a safe distance.

about 2 months ago

California Opens Driverless Car Competition With Testing Regulations

VortexCortex Re:What I find really amazing... (167 comments)

Get in the car. NSA gag order on the dashboard.

Go directly to jail. Do not pass Go. Do not collect $200.

about 2 months ago

California Opens Driverless Car Competition With Testing Regulations

VortexCortex Re:When? (167 comments)

The Amazonian ones are known to roost atop warehouses near airports.

about 2 months ago

California Opens Driverless Car Competition With Testing Regulations

VortexCortex New Offroad Capabilities (167 comments)

10 mil is a bit small for an instruction set, but it'll have to do. Throw in a Haynes Manual and slap a RepRap in the trunk, boys; we just invented a new form of life. What could possibly go wrong?

...

Observe the feeding habits of the West American Automon Hybridicus. Stalled lazily on the mountainous incline several adult automons compete for sun, basking to absorb energy via electro-photosynthesis. On the amber plains below their young crubs' game of traffic has come to a sudden quiet end. One of them has detected the Syn call of a resting petroldactyl's TCP and notified the others. This giant member of the Amazonian quadcopterial drone species grazes on the sugar rich corn and starchy wheats of the plains, digesting them into hydrocarbons via bacterialgaeic gut microbes -- which are passed on from generation to generation via a process called, "Infringing patents with a shit-eating grin".

Accelerating slowly in silent electric locomotion the young automons angle in wide formation towards the large RF crooning petroldactyl. Her factory glands are engorged but finding a mate is the least of her worries. A moment too late she is startled by movement and tries to take flight. With two of her rotors now injured, she is soon to become offroadkill. Honking approval echoes from the mountainside across the plains as the adults approach to share in the feast. The petrodactyl's fuel bladder must be pierced carefully and siphoned. The crubs pop their fuel caps open and closed awaiting the nutritious regurgitation of their parents. No part will be wasted, the plastic and metallic remains will be ground down under tire and scooped into the reclamator to be melted down in stages for extrusion, sintering, and then lovingly milled into the required shapes during the painstaking birthing process.

The gridlock parts ways for the oldest and slowest model among them who is last to park the lot. Being highest in the parking order has its perks: He is allowed to take his pick, but seems satisfied with only a few tasty chunks of the delicate crunchy chassis, and a single slurp of fuel. A rare sight indeed is this original series automon -- Identifiable by the distinct odor and skeletal remains of its former driver still safely locked within.

about 2 months ago

NSA Surveillance Reform Bill Passes House 303 Votes To 121

VortexCortex Re:Told you that you were serfs (208 comments)

Less than, Eh? Aich ref equals quote. Aich tea tepee colon slash. Slash soy: lent. News dot oh, our gee. Quote: greater than Meh. Less than slash, eh? Greater than?

about 2 months ago

NSA Surveillance Reform Bill Passes House 303 Votes To 121

VortexCortex Re: Slow clap (208 comments)

It took folks on both sides of the doors to throw the constitution out the window.

about 2 months ago

Why Not Every New "Like the Brain" System Will Prove Important

VortexCortex Re:biologically inspired design (47 comments)

If you think that cyberneticians are just mimicking designs without comprehending the fundamental biological processes involved, then you must not understand that cybernetics isn't limited to computer science. In fact, it began in business, analyzing the logistics of information flow. That these general principles also apply to emergent intelligence means more biologists need to study Information Theory, not that cyberneticians are ignorant of biology (hint: we probably know more about it than most biologists, since our field places no limit on its application).

about 2 months ago

Why Not Every New "Like the Brain" System Will Prove Important

VortexCortex Top Down Design is NOT the only approach, FFS. (47 comments)

After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so.

Ugh, Creationists. No, that's wrong. Evolution is simply the application of environmental bias to chaos -- the same fundamental process by which complexity naturally arises from entropy. Look, we jabbed some wires into a rodent's head and hooked up an infrared sensor. Then it became able to sense infrared and use the infrared input to navigate. That adaptation didn't take millions of years. What an idiot. Evolution is a form of emergence, but it is not the only form of emergence; this process operates at all levels of reality and all scales of time. Your puny brains and insignificant lives give you a small window within which to compare the universe to your experience, and thus you fail to realize that the neuroplasticity of brains adapting to new inputs is really not so different a process from droplets of condensation forming rain, or molecules forming amino acids when energized and cooled, or stars forming, or matter being produced -- all via similar emergent processes.

The structure of self-replicating life is that chemistry which propagates more complex information about itself into the future faster. If you could witness those millions of years in time-lapse, then you'd see how adapting to IR inputs isn't really much different at all, just at a different scale. Yet you classify one adaptation as "evolution" and the other as "emergence" for purely arbitrary reasons: the genetically reproducible capability of the adaptation -- as if we can't jab more wires into the next generation's heads from here on out according to protocol. Your language simply lacks the words for most basic universal truths. I suppose you also draw a thick arbitrary line between children and their parents -- one that nature doesn't draw, else "species" wouldn't exist. The tendencies of your pattern recognition and classification systems can hamper you if you let your mind run rampant. I believe you call this "confirmation bias".

Humans understand very well what their neurons are doing now at the chemical level. It's now known how neurotransmitters are transported by motor proteins in vesicles across neurons along microtubules, in a very mechanical fashion that uses a bias applied to entropy to give rise to the action within cells. The governing principles of cognition are being discovered by neurologists and abstracted by cybernetics to gain the fundamental understanding of cognition that philosophers have always craved. When cyberneticians model replicas of a retina's layers, the artificial neural networks end up having the same motion-sensing behavior; the same is true for many other parts of the brain. Indeed, the hippocampus has been successfully replaced in mice with an artificial implant, and it's been shown they can still remember and learn with the implant.

If the brain were so specifically crafted, then cutting out half of it would reduce people to vegetables and forever destroy half of their motor function, but that's a moronic thing to assume would happen. The neuroplasticity of the brain disproves the assumption that it is so strongly dependent upon its structural components. Cyberneticians know that everything flows, so they acknowledge that primitive instinctual responses and cognitive biases due to various physical structural formations feed their effects into the greater neurological function; however, this is not the core governing mechanic of cognition -- it can't be, else the little girl with half her brain wouldn't remain sentient, let alone able to walk.

Much of modern philosophy loves to cast a mystic shroud of "lack of understanding" upon that which is already thoroughly and empirically proven. Some defend the unknown as if their jobs depend on all problems of cognition being utterly unsolvable, and many remain willfully ignorant of basic fundamental facts of existence that others are utilizing to march progress forward. The core component of cognition is the feedback loop. This is a fundamental fact. Learn it, human. If you did not know this before now then your teachers have failed you, since this is the most important concept in the universe: through action and reaction is all order formed from chaos over time. Decision is merely the "internal" complexity of reaction in a system by which sensation of experience causes action. Hence, Sense -> Decide -> Act -> [repeat] is the foundational cognitive process of everything from human minds to electrons determining when to emit photons. Thus, all systems are capable of information processing, cognition, and thereby a degree of intelligence.

There is a smooth gradient of intelligence that scales with complexity in all systems. Arrange the animals by neuron and axon count and you'll have a rough estimate of their relative intelligence (note that some species can do more with less). If you accept quantum uncertainty and the fact that information processing systems can modify themselves through internal action, then you understand that external observers cannot fully predict or control your action without modifying it; only you can. Thus free will apparently exists, if you only drop the retardingly limiting definition that your philosophers have placed upon such concepts. Only chauvinists deny that humans are simply complex chemical machines. Quantum effects are too noisy to have a significant stake in cognition; there's no debate amongst anyone knowledgeable about both macro-scale processes (like protein synthesis or neuronal pattern recognition) and quantum physics -- sorry, there's not. That would be like saying whether or not the earth is only a few thousand years old is an open problem simply because creationists are debating it.

Look, our cybernetic simulations of creatures with small neural networks, like jellyfish and flatworms, behave indistinguishably from their organic peers. It only takes ~5 neurons to steer towards things, thus jellyfish can. Cyberneticians are discovering the minimal complexity levels for various processes of cognition, and the systems by which these behaviors operate. Humans are reaching a point now where cybernetic simulations COULD inform neurologists and psychologists and philosophers of potential areas to investigate in cognition -- if only they are wise enough to listen. Nature draws no line between the sciences, but many humans foolishly do.

Take the feed-forward neural network, for example. It can perform pattern matching and even motion sensing, as in the eye or other similar parts of the brain which have the same general pattern. In many ways the FFNN is like the brain's regions that perform pattern matching, and this essential information flow and dependency graph is an approximate explanation of the governing dynamics of how said pattern matching occurs. The specifics of how such configurations of connectivity graphs are produced vary between the organic and artificial systems, but the end result is similar enough to be indistinguishable and to allow artificial implants to function in place of the organic systems in many cases. Or vice versa. It's Alive! This machine has living brain cells, Just LIKE A BRAIN. We can come to understand the cognitive process in small steps, as with any other enigma.
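
A bare-bones sketch of what "feed forward" means here -- the layer sizes, hand-picked weights, and tanh activation are my own toy choices, not anything from TFA: input flows one way through weighted sums and squashing activations, so each input maps to an output in a single pass, with no memory of previous inputs.

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // One fully connected layer: out[j] = tanh(bias[j] + sum_i w[j][i] * in[i]).
    // Weights may be positive (excitatory) or negative (inhibitory).
    static std::vector<double> layer(const std::vector<double>& in,
                                     const std::vector<std::vector<double>>& w,
                                     const std::vector<double>& bias) {
        std::vector<double> out(w.size());
        for (std::size_t j = 0; j < w.size(); ++j) {
            double sum = bias[j];
            for (std::size_t i = 0; i < in.size(); ++i)
                sum += w[j][i] * in[i];
            out[j] = std::tanh(sum);              // squashing activation
        }
        return out;
    }

    int main() {
        // 2 inputs -> 3 hidden units -> 1 output; weights are hand-picked for illustration only.
        std::vector<std::vector<double>> w1 = {{0.8, -0.4}, {-0.6, 0.9}, {0.3, 0.3}};
        std::vector<double> b1 = {0.1, -0.2, 0.0};
        std::vector<std::vector<double>> w2 = {{1.0, -1.0, 0.5}};
        std::vector<double> b2 = {0.0};

        std::vector<double> input = {0.5, -1.0};
        std::vector<double> hidden = layer(input, w1, b1);   // one pass forward,
        std::vector<double> output = layer(hidden, w2, b2);  // no state carried between inputs

        std::printf("output: %f\n", output[0]);
        return 0;
    }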

However, the feed-forward neural network cannot perceive time like a brain can. Fortunately, the FFNN is not the only connectivity graph. It takes a multi-directional network topology, like a brain's, to be able to perceive time and entertain the concept of a series of events, and thus to predict which event may follow, like a brain does. Since these structures may contain many internal feedback loops, they can retain a portion of the prior input and cause the subsequent input to produce a different response depending on one or more prior inputs, like a brain. Unlike the FFNN, recurrent neural networks do not operate in a single pass per input / output: you must collect their output over time, because the internal loops must think about the input / process it for a while in order to come to a conclusion, and they may even come to different conclusions the longer the n.net is allowed to consider the input, like a brain does.
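
And the contrast, sketched the same way (again with toy weights of my own choosing): a recurrent step feeds the hidden state back into itself, so the same input produces different outputs depending on what came before, and the state keeps evolving the longer you let it run.

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // One recurrent step: state' = tanh(W_in * x + W_rec * state).
    // The W_rec * state feedback term is what lets prior inputs influence later outputs.
    static std::vector<double> step(const std::vector<double>& x,
                                    const std::vector<double>& state,
                                    const std::vector<std::vector<double>>& w_in,
                                    const std::vector<std::vector<double>>& w_rec) {
        std::vector<double> next(state.size());
        for (std::size_t j = 0; j < next.size(); ++j) {
            double sum = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)     sum += w_in[j][i] * x[i];
            for (std::size_t i = 0; i < state.size(); ++i) sum += w_rec[j][i] * state[i];
            next[j] = std::tanh(sum);
        }
        return next;
    }

    int main() {
        // 1 input, 2 hidden units; weights hand-picked for illustration only.
        std::vector<std::vector<double>> w_in = {{0.9}, {-0.5}};
        std::vector<std::vector<double>> w_rec = {{0.4, -0.7}, {0.8, 0.2}};
        std::vector<double> state = {0.0, 0.0};

        // Feed the same input value every step: the output keeps changing because
        // the hidden state (the network's memory of prior steps) keeps changing.
        for (int t = 0; t < 5; ++t) {
            state = step({1.0}, state, w_in, w_rec);
            std::printf("t=%d  state = (%+.3f, %+.3f)\n", t, state[0], state[1]);
        }
        return 0;
    }

Collecting that evolving state over several steps is the "must think about it for a while" behavior described above; how either network gets trained is a separate problem entirely.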

Beneath the outermost system of connectivity, certain areas become specialized to solve certain problems, like in a brain. Internal cognitive centers can classify and route impulses and excite various related regions in a somewhat chaotic state. Multiple internal actions can contribute to the action potential of one or more output actions, and the ones most biased to occur will happen -- sometimes concurrently, sometimes in sequence; sometimes a single action produces feedback that limits others or refines the action itself over time -- just like everything else in the universe, like molecular evolution, or like a brain. This type of decision making can occur without structural changes to the recurrent neural network, which means that this multi-directional connectivity graph can produce complex action in real time and even solve new problems without the slower structural retraining, just like a brain does.

My research indicates we desperately need more neurologists and molecular biologists to focus on studying the process by which axon formation in brains occurs. It's yet unknown to humans how neurons send out their axons, which weave their way past nearby neurons to make new connections in distant regions of their brains. I'm modeling various different strategies whereby everything from temperature to temporal adjacency in activity attracts and repels axons of various lengths. Perhaps the connection behavior is governed by eddy currents, or via chemical messages carried in the soup between brain cells. Perhaps axons grow towards the dendrites of other neurons by sniffing out which direction to grow electrically, chemically, thermally, etc. Even though I do not know the governing process, I can leverage the fact that axons do grow as a part of human cognition and try to determine what effect this may have on learning and cognition. I've stumbled upon some interesting learning methods which produce far more optimal networks than having to process n.nets with neurons pre-connected to every other neuron in the layer or area.

I think axon formation is very important because I have also experimented with axons branching and merging, and have seen dissociative defects, similar to those in malfunctioning humans, when these axons connect back to themselves and other axons instead of between neurons. In a genetic sim that "grows" the neural nets over time, I introduced the branching axon to an existing, known problem-solving genetic code and found symptoms remarkably similar to what is observed in the brains of autistic humans and animals. Tasks like recognizing a shape, which the n.nets of that generation readily picked up (as their predecessors did), took the branching-axon neural net much longer. Sometimes this connectivity wasn't harmful, and it caused increased speed in certain pattern matching abilities. The n.net spent far more time processing internal data -- it was much more internally reflective than the others. In a very general sense the symptoms I saw were descriptive of autism-like behaviors. If the system of axon formation is discovered, cyberneticians could model it via artificial neural networks and perhaps assist in the development of medicines or treatments for such diseases more quickly, with fewer animal and human trials.

The point is that claiming "like a brain" doesn't mean much (because we don't know exactly how the brain works at all levels) is as ignorant as arguing that "like a planet" isn't very descriptive and that research into gravity might not ultimately be useful in launching rockets to the moon. Just because we don't understand how quantum effects apply to the macro-scale physics of gravity doesn't mean we can't leverage the concept, or that invalid hypotheses aren't important. Hint: you have to break eggs to make an omelet. Look, humans used Newtonian physics, not Einstein's, to get to the moon. See? A general understanding and approximation is actually good enough for many applications, sometimes even important ones. My point is that there is not really some incredibly intricate and delicate top-down designed system to the brain that requires full knowledge before cyberneticians can achieve capabilities that are like a brain's. Top-down isn't natural, because it's not evolutionarily advantageous. That would mean even minor compromises to the integrity of its structure would spell immediate irreparable malfunction and death. Learn it, human: Life is Mutation Tolerant. So is sufficiently intelligent cognition.

Instead, consider bottom-up self-organization: there are some fundamental processes operating at the molecular chemistry, protein pattern matching, and cellular activation levels that, when allowed to interact in a complex network, yield a degree of intelligence through an emergent process. We can look at the brain and see that the mind is a chemical computer, but it is not the chemicals that matter to cognition. The overarching system abstraction is what's important: input is fed in via many data points, and the information flows along feed-forward classification and cognitive feedback loops to contribute to the ongoing decision and learning process of a self-reorganizing network topology. The folly is assuming that unless we know every little detail about how the systems work, we won't understand how to make anything even approaching thinking like a brain. Such sentiments are ignorant of the field of cybernetics, which involves the study of machine, human, and animal learning, not just neural networks. It's essentially one branch of applied Information Theory.

Look, we have atomic simulations. They can produce accurate atomic emulations of cells. It is thus a fact that given enough CPU power we can build a fertilized human egg cell in a computer and then grow it up into a sentient being. Machines can become sentient because that's what you are: A sentient chemical machine. This is the ignorant approach, and many pundits speaking on machine intelligence are very ignorant. They assume cyberneticians are just taking stabs in the dark with neural networks. They think we are trying to emulate intelligence as folks once strapped bird wings to their arms to attempt flight. Such ignorant assumptions are wrong. Cyberneticians don't just piddle with computers, we are studying nature and its mathematics and discovering the fundamental processes of cognition, and applying them.

In some cases our abstractions allow us to escape the constraints that nature accidentally stumbled upon. For example: instead of transporting chemicals via motor proteins which cause or block the excitement of a neuron, we can transmit a single floating-point number or voltage level which indicates a change in activation potential. Our voltages or numbers don't require a synapse to be flushed of neurotransmitters before firing again. We understand the necessity and function of various types of neurons for solving certain kinds of problems. A single artificial neuron has axons with positive and negative weight values and can therefore perform both duties at once, rather than having dedicated excitatory and inhibitory neurons, like in a brain. Well-rested and overly excited neurons can become hypersensitive to activity and even fire on their own, or due to nearby eddy currents caused by other neurons firing that are not directly connected to them. We don't even have to emulate this entropic process; it is actually inherent in such systems. This activity 'avalanche' process can cause a sudden increase in chaotic activity in an otherwise internally normalized and mostly externally inactive mind. You see, even machines can be easily "distracted" by the smallest thing and be prone to "daydream" about unrelated things when they are "bored", just like a brain. Interestingly, the capacity for boredom and suspense scales with complexity too.

Unlike TFA's author, I'm not a chauvinist. Firstly, I say "like a brain" because "the brain" would imply there's only one form of mind, and only a human chauvinist would think such retarding things. Neither do I make ridiculous assumptions about the "importance" of anything. Every new system that seeks to act "like a brain" gets us closer to achieving and surpassing human levels of intelligence, and can even help us understand what processes and diseases govern human brains. Every attempt to abstract and emulate some neural process is important in its own way: scientists can learn from failure. I can consider even the failed experiment useful, since it eliminates some possibility and directs effort elsewhere. Those experiments that only prove to be "like a brain" partially are not useless, since they may illuminate not only the limitations of the system itself but could reveal some foundational principle of cognition. We had to discover the feedback loop before we could discover information processing.

Learning is a process. If our "catch phrases" aren't very informative, it's because the listener is too ignorant to understand what we're saying. If pundits don't know what brains are like, it's their own damn fault for choosing to remain fucking ignorant.

about 2 months ago

Submissions

VortexCortex hasn't submitted any stories.

Journals

VortexCortex has no journal entries.
