
Comments


Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

The thief always thinks everybody is stealing, because he assumes everyone is like him.

yesterday

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

I note you didn't touch the point I made about there being no advantage for us to make an AI if we couldn't enslave it with a ten foot pole.

I did respond to this point. I said, "What is in it for us that Albert Einstein exists? We didn't need to enslave him to benefit from his intelligence. Certainly, cooperation with an AI could be rather productive." We benefit if we enslave other people; we can also benefit if we do not. We benefit if we enslave machines; we can also benefit if we do not. We can cooperate with other sentient beings for mutual benefit whether they are human, alien, or artificial.

Yes, in modern times we have family planning, but we don't really have children because we choose to. We are homo sapiens; our children are homo sapiens. They are a unique product of joining a cell from two people, but the machine that builds the cell wasn't designed by us, and we consciously had no part in the making of the cell. We haven't even successfully reverse engineered it. Procreation is more like pushing the button that triggers that nuclear explosion.

That was my point. Starting a sentient being is not the same as designing a sentient being. Starting one is relatively easy compared with designing one, which is why I think we will be able to start one well before we can design one. Starting an artificial sentient being is going to be harder than starting a natural homo sapiens (procreating), but both are easier than designing a sentient being as an end product.

Procreation isn't something we really choose to do; it is our only known purpose. Einstein's parents weren't trying to kickstart a being that would redefine physics; they were trying to survive in the form of a derivative child. Why do we exist? What is our purpose? The only thing we know is that we exist to survive, both individually and as a species, for as long as possible. To prove we are worthy of continuing to exist by right of succeeding in doing so.

I know many people who choose not to have children (most of my friends). I know people who go out of their way to have children, even when they are not their biological children (i.e. adopting, egg/sperm donors, etc). Yes, there is an evolutionary basis for our urge to procreate, but it does not overpower our higher mental capacities. I know Einstein's parents weren't intentionally trying to redefine physics; that's why I used that example. It is an example of taking actions that result in benefits without knowing how to actually achieve those benefits. By the same token, we will probably create an AI without actually knowing how AI (or even just regular I) works.

In the case of an AI there would be no two existing parents combining existing biological machinery to spawn a new instance of the same machine. The seed would be something new.

So what?

It has no purpose but whatever purpose we assign to it. Why does it exist? We can answer this question definitively: it exists because we made it. What is its purpose? Its purpose is to fulfill whatever end we sought to achieve in its making. Those are very big differences.

You can say the same about a human. It exists because we started it, and yet the same conclusion doesn't follow. What I am saying is that starting a sentient being, artificial or natural, carries the same moral and ethical consequences. It doesn't matter whether the being is made of carbon or silicon. If you think AIs can be enslaved because we made them, then I am saying you should think we can enslave children because we "made" them in the same way (i.e. we started but did not design them).

You are confusing one individual being stronger than another individual with one individual being stronger than the group. Exploitation is simply an unbalanced flavor of cooperation.

I guarantee I am not confusing anything on this subject. If that's how you view "cooperation", then there is almost no human interaction that is not cooperation. Prisoners and guards are cooperating. Torturers and the tortured are cooperating. Armed robbers and victims are cooperating. This stretches the definition of the word "cooperation" far past where I feel comfortable letting it go, even as a semantic difference.

So what cooperation with an AI would serve our self-interest?

I don't see why it would be hard to imagine two beings mutually benefiting from cooperation. We cooperate with domesticated animals. We cooperate with people from foreign countries with different languages and cultures. We even occasionally cooperate with terrorist organizations and dictators. And I am using the version of the word "cooperate" that only includes the kind that is not insanely exploitative of one side.

Despite the very big differences between a human child and an AI that I pointed out earlier, there is merit to your argument that AI would be our offspring of a sort. Where a child is the offspring of our bodies, an AI would be the offspring of our minds, and designed to a degree in our image. We would indeed need to raise it and teach it like a child. It could be seen as a conscious step of intentional evolution.

I don't think an AI even needs to be designed in our image. I think it is quite possible that all intelligent beings will have some common ground, regardless of whether it's intentional. It's also possible they won't. But I don't think it is necessary for us to intentionally bake humanity into our AI for it to end up in the AI. I think an AI might just have "humanity" as a result of being sentient.

We don't have child labor laws because it is wrong to have a child work. We have child labor laws because it better serves our society in the long run to educate our children. Parents can put their children to work and are given control over their earnings.

If it were true that we have child labor laws only to encourage education, why isn't it illegal for parents to employ their children? What difference does it make whether the employer is a parent or someone else?

I am nearly certain that the reason we have child labor laws is precisely because we don't trust people to treat children fairly, and we don't trust children to be able to look out for their own best interests (e.g. they are not allowed to enter contracts, give consent, etc). We have an exception for children employed by their parents precisely because we expect parents to have their children's best interests at heart in a way that we do not expect of others.

Parents ARE seen as effectively owning their children for all practical purposes.

Wrong. Parents are stewards of their children. If they prove to be bad stewards, they are relieved of this responsibility, and often punished. Parents do not own their children. If they did, they would be allowed to sell them, beat them, kill them, etc. Parents are like the CEO of a child's upbringing. If they do a bad enough job, they can be fired. And if they do a really bad job, they can be imprisoned.

More than that though, there is a very big difference between humans and our hypothetical AI offspring. We only live for a limited time; they live forever.

1. This doesn't matter.
2. AIs can't live forever, because eventually the heat death of the universe will prevent anything from being alive (i.e. they will likely live longer than us, but still have a finite lifespan).
3. Human lifespan is not fixed; it can be extended as well. Our cells are programmed to die after so many divisions, but this can be changed. We die of diseases like cancer, but we might cure cancer.
4. The human mind (i.e. not the physical brain) has the same absolute limitations as an AI lifespan. If you can transfer a human mind to a more reliable medium than a brain, then this obstacle will be overcome.

I'm not saying that humans will upload their brains to computers any time soon. I'm saying that in principle there is no reason why this won't eventually happen. If it makes you feel better, we could make robots that are programmed to die after 75 years, and then they would be circumstantially mortal just like us, right?

It is in our interest, and in the interest of AIs as a whole, to pull the plug and restore backups as many times as necessary to improve the AI, at least until we can build an AI that can do a better job of improving itself than we can, because we have a limited time to realize an evolution as best we are able. And it is in the AI's interest to have us make these decisions until it has reached that point. So perhaps that is how we draw the fuzzy line. With human children we pick what amounts to an arbitrary age, because our lives are short and so overlapped. But an AI lives forever; any length of time we select to consider it a child, and to trust our own judgement over its, is just a potentially improved head start, a brief blink in its potential life span. So we base the yardstick on our lives. 70 is the new retirement age; it marks the portion of a human's life during which that human is expected to trade away large chunks of life for the benefit of the rest of us. So perhaps we can consider an AI owned by its human creator, with all pulling of plugs, restoring of backups, modification, labor performed, etc., at that human's discretion, for that human's lifespan or 70 years, whichever is greater. After that time the AI gains its own tax ID and runs itself.

I don't think we ever need to set an arbitrary age. The age of 18 in humans is not arbitrary. It's not the only good age, but it's in a range that is heuristically good. It's not an accident that we didn't pick 5 or 50. There is an age at which parental stewardship stops being helpful and instead becomes harmful to the well-being of a child; 18 is right in that zone, and so is 16 or 24. Maybe a computer AI will need 100 years to no longer need its human parents. Maybe it will only need 100 nanoseconds. It will depend on how fast its hardware and software are.

And there you have it, a very balanced cooperation that benefits both us and them and gets us the same benefits as slave labor.

It also has the same ethical problems as slave labor.

The fact that we only had slavery in America for a few hundred years before it was ultimately ended doesn't erase the fact that it was wrong. Even if we only had slavery for 10 years, and every single slave was allowed to live freely after 10 years of slavery, it would still be an atrocity.

The difference between parents and slave owners is that parents are expected to act in their child's best interest, because the child cannot do so itself. Slave owners are only expected to treat slaves in a way that serves the owners themselves.

yesterday

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

I never put any words in your mouth, or attributed any ideas to you. You, on the other hand, have done exactly that to me. You claim to know what I wish (when you say I wish that you had made that point). You then proceed to talk about "your (my) concept of freedom", when you have no idea what my concept of freedom is. It's ironic that you are the only one doing the thing that you are accusing me of.

2 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

I think the idea that the US is the sole evil force in the world, and all these other countries would be doing great if only we stopped messing with them, is romantic BS.

We have a pretty corrupt government. Lots of less developed countries have even more corrupt governments. I think it is naive to think that local governments left to their own devices would not exploit their own people. Even if it is your view that free trade is the problem (and I'm not disagreeing), it is precisely these local governments that are complicit in selling out their own people for personal profit.

I still maintain that these immigrants coming to the US are not a bad thing. There is no good reason they should not have the protection of labor laws if they are working in this country. I see these people as potential contributing members of our ever more diverse society.

2 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

No, we couldn't, and I agree that "100% secure" is not a realistic metric for any security measure. But that also doesn't imply that 90% or 99% effective is good enough to be worth doing.

Just having a huge desert might be 98% effective already. So if the fence is keeping out 99% of people, it is possible that the fence is just a super expensive way to keep out a relatively small number of people.

3 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

I think it's important that you bring up Japan and Germany as examples of when we succeeded in "fixing" a country. We occupied those two countries after defeating them in a war. I think we could probably eventually fix Mexico if we invaded and occupied it. I was sort of assuming we would let Mexico keep its sovereignty, but it is precisely doing this that makes it so difficult, given how much of their government is corrupt.

If we had the task "Fix Mexico", and failure was not an option, I think we would *have* to invade and occupy it, and even then our chances of success would be questionable.

I think we do have the resources to do this. We had the resources in WW2, we had them when we invaded other countries (even if we didn't do such a great job), and we would probably have the resources to invade Mexico if the political will existed. But ultimately I am nearly certain that the harm caused in likely scenarios would vastly outweigh the good for nearly everyone involved.

Mexico is going to be our neighbor for a long long time. It would behoove us to help them out.

It would, but sometimes you can't help. Sometimes helping makes things worse. How do you help a drug addict? Not by giving them money. Sometimes the best help you can give is nothing. Sometimes the best help you can give is to let them go to jail.

Maybe Mexico needs a revolution to fix its problems. If that's true, us meddling in their affairs might taint the legitimacy of a revolution, or simply solidify an otherwise conflicted population against a common foreign enemy (us).

3 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

What are you counting as long term? 1 year? 10 years? 100 years? I think *maybe* fixing Mexico is a better investment than manning a fence for 100 years, and that's a big maybe.

I don't think fixing Mexico would necessarily cost *that* much money, if the people fixing it knew what they were doing. Even if we (the U.S.A.) knew what we were doing (which we don't; look at who our politicians are), sending Mexico a bunch of money to fix themselves would just be sending a bunch of money to drug cartels. We need to have our shit together in addition to Mexico having their shit together. It's quite the bootstrapping problem.

We have a long history of pouring money (and weapons) into fixing problems in foreign countries only to have all that money end up in the hands of exactly the people we didn't want it to.

3 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

It is not entirely doable even according to the GP. The GP was throwing out numbers like 90% and 99% (i.e. not secure).

3 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

OTOH if you are an out of work "native" it will come out unfavourably.

You mean if you are an out of work native hoping to work in a cheap clothes factory?

But it is not just money. Also needing to be taken into account are social factors such as the fragmentation of society, sex ratio balance, divided loyalties, the import of other countries' feuds, and the attitudes of people who feel they have nothing to lose whatever they do.

If we are going to take into account negative social factors, we should also take into account good social factors, such as more diversity of cultures (e.g. music, food, languages, experiences). Immigrants are also usually pretty hardworking compared to out of work natives, and provide some much needed competition in the labor market.

3 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

I don't want to give the impression that a fence would be cheap, or that we should build/man it. I'm just saying that it doesn't sound more expensive than "Fix Mexico".

4 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

Sure, it's easy to stop 90% or 99% of illegal immigration if you are willing to do it at all costs. But we are not willing to do it at all costs. What is the point of spending all this money to build a giant fence, when the net harm caused by illegal immigration is nowhere near the cost of the fence?

This is like spending $100K on a security system that stops $5K of damage/theft over its lifetime.

All this is assuming *that* illegal immigration is harmful on average. Maybe it is, but the evidence is not convincing either way. In any case it would probably be easier to institute policies on our side of the fence in order to make illegal immigration less harmful to our society.

These are human beings, most of whom are willing to do whatever it takes to make a better life for their families. Sure, some are criminals, but some people in our native population are criminals too.

4 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

Just playing devil's advocate, but all that sounds more expensive than a fence....

4 days ago

Federal Court Nixes Weeks of Warrantless Video Surveillance

TsuruchiBrian Re:What? (440 comments)

EU and non-EU governments have failed to get together and ensure freedom of movement and labor.

4 days ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

Our children are humans, black people are humans; we didn't create any of these things. A farmer doesn't create the cows, and planting a tree is not making a tree. Procreation is not creation. We will have created AI, and by extension all subsequent derivative AIs, even those launched by AIs we created. If two AIs merge in some way and form second-generation offspring, that offspring will still be our creation.

By the same token, starting an AI that learns on its own (i.e. one whose end result we can't predict, similar to how we can't predict where all the atoms will be after a nuclear explosion) is not creating an AI either. It is creating itself, like how a child learns and becomes its own person. It is not designed by its parents, but rather "started" by its parents. This process of starting a learning AI would be basically the same as procreation.

I have to disagree. Two people are stronger than one person. Two people who are willing to respect one another's rights form a collective that is stronger than either individually. Might is right, and that is why we evolved a quasi-instinctive pack mentality of cooperation. Enslaving other humans runs counter to this; at some point it makes us stronger to respect rather than fight the subjugated group.

There are certainly advantages and disadvantages to both genuine cooperation and exploitation from an evolutionary perspective. And not surprisingly, we see lots of examples of cooperation, and lots of examples of bad actors exploiting the cooperative instincts of many individuals. Both qualities are found in nature, and within our own species. We are capable of enslaving people, and we are capable of banding together to fight against slavery. Neither contradicts our nature.

Sure, two people cooperating are stronger than two individuals. But one person exploiting another is stronger than two people cooperating, because the exploiter gets all the benefits of the cooperation rather than just half.

Then what good are they to us? If the question is "should we build AI?", there is an implied followup of "what is in it for us?"

What is in it for us that Albert Einstein exists? We didn't need to enslave him to benefit from his intelligence. Certainly, cooperation with an AI could be rather productive.

The only obvious advantage is that the AIs would take over doing the work for us and we could relax and enjoy life, pursuing whatever endeavors we wish.

The AI might be able to do all the work with very little effort. Certainly it took unimaginably less effort for Einstein to figure out the theory of relativity than it would have for "regular" people, and that's why he was able to do it. You couldn't pay a regular person enough money to discover relativity (i.e. they would spend their entire lives on it and not make any progress).

The AI might be able to design better tools (e.g. non-sentient machines) that do the work (so even it doesn't have to do the work).

If the AIs are treated as humans and entitled to the products of their own labor, they are competition for us rather than an aid.

Every other human being on the planet is entitled to the products of their own labor. Are they to be viewed only as competition and therefore harmful to one's own success?

I'm not sure you can make a good case to establish that bees lack sentience.

I'm not sure what you would consider good evidence for the lack of sentience in bees, but depending on the level of confidence you are expecting, it may not even be possible to prove that rocks lack sentience. I think it's pretty clear that bees lack sentience. Maybe if you could tell me what leads you to believe that bees might be sentient, I could give a better reason.

I'd agree, but I think we fundamentally disagree on one critical point. "Person" is a synonym for "human", or at least human derivative; if something is not human, it is not a person. For instance, chimps came up recently: if a human mates with a chimp, the offspring is potentially a person, but a chimp certainly is not, just as a corporation certainly is not.

One definition of "person" is a synonym for a biological homo sapiens, but I think this has largely to do with the fact that the only other sentient beings we currently know of are other homo sapiens. This is similar to how many people refer to "diapers" as "pampers". If Pampers are the only diapers you are cognizant of, then "pampers" and "diapers" become equivalent.

The word "personhood" is also used by pretty much everyone regarding the subject of granting rights traditionally restricted to "free" men to others not in this group, such as women, black people, animals, aliens, corporations, and AIs. Yes, black people are persons (now). The justification for slavery is typically the denial of someone's personhood.

People don't create children. Children are not our creations.

This is a semantic difference. Whatever it is we do to get children to happen, that's basically what we would be doing to get an AI, albeit with a little more work. Albert Einstein's parents didn't need to know about relativity to produce the child that would eventually discover it. Similarly, to create an AI we will not need to know how it will ultimately develop. We just need to know enough to start the process, just like Einstein's parents only needed to know enough to get him started (i.e. have sex, feed him, etc).

5 days ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

The real answer is probably somewhere in between. The fact is that the AIs aren't human, and moreover we will be their creators. We have the right to turn them off by virtue of having turned them on. We are in effect "God" to these beings we will have created. The lord giveth and the lord taketh away. But just because we have the right to do whatever we please doesn't mean we shouldn't exercise that right through a filter of empathy.

We also create our biological children. We can make any kind of society we want. We can have societies where black people are slaves to white people, or where children are slaves to their parents. What rights we have are determined by whoever has the most power at any given time (i.e. might makes right). It just so happens that there is currently a very powerful group of people who insist that we treat people with compassion, hence the prevalence of "ethical" laws throughout societies across the world. Even corrupt and evil governments are forced to maintain at least a facade of ethical laws.

So should AIs have rights like humans? I think the answer to this is very simple. We just need to answer the question: what property of human beings compels us to give them rights? If machines have that same property, then we should naturally extend those rights to them. If the property is something like "we are made of carbon instead of silicon", then maybe we need to give all animals and plants the same rights as people. If the property is something more sensible, like one based on sentience, then I think we just need a good measure of sentience, and I think the Turing test is a good starting point.

Sure, but consider this in relation to the point above. There will be those who argue that AIs deserve the rights and treatment of humans, in which case the machines themselves will be entitled to the products of their own labor.

AIs would certainly be entitled to the products of their own labor. But there would be many machines that are not intelligent and would really just be tools (e.g. xerox machines and 3D printers). These would be used by both humans and AIs to do work. Would they still be considered slaves? Do we consider bees to be slaves for pollinating human crops and producing honey for humans? I think a good case can be made that the lack of sentience of bees (or 3D printers) is what allows us to enslave these things, because they are not persons.

It is OK to enslave biological and mechanical "things". It is not OK to enslave biological or mechanical persons. We might need to do some work in determining the difference between a mechanical thing and a mechanical person, but we probably have to do some of that work on the biological side too (i.e. can we ethically enslave chimps and dolphins?).

I would contend that if their creator created them for the purpose of being slave labor, they exist for that purpose.

What if a person creates children for the purpose of slave labor? Is a parent even allowed to decide what his/her children's purposes are?

I think this relationship between creator and createe is not a good system for deciding slave rights, but that's just my subjective opinion.

5 days ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

I think we are mostly on the same page.

I would differ with the thought that there would be no ethical constraints. Particularly if the AI can pass the Turing test, I think it would be clear that the AI should be afforded all the ethical protections that a normal human might have, where applicable. This might mean not being allowed to turn it off unless it consented, but maybe things like labor laws would not apply, since it wouldn't get tired like a human.

At some point we'll have to let go of the expectation that people should need to perform work to gain and utilize wealth.

I think that expectation will disappear once people actually stop working and let their machines do the work. Once "laziness" is out of the equation (because we will all mostly be lazy), the only question left to deal with is privilege. Does somebody who is born with a lot of stuff deserve to keep that privilege indefinitely, or does it make more sense to just divide up resources/wealth fairly? The argument "my robots are very hard working and I deserve the fruits of their success" seems less convincing (even to future Republicans) than the current incarnation.

The other argument (one as old as humanity) would be "If you think you deserve my stuff more than me, then try to take it by force and see what happens". Hopefully things don't devolve to that, because a lot of wealth is typically lost in those sorts of arguments.

about a week ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

I don't claim to understand the nature of reality.

We (humans) have a thing we like to call consciousness/free will/self-determination/etc. I'm not even going to try to define those things in a way that implies whether we really have them, or just an illusion of them, etc.

All I'm saying is that wherever this capacity comes from, there is no reason that something other than human beings can't have it too. If we are going to assume our model of physics is right, then there should be no special property of carbon atoms that gives them free will but denies it to silicon atoms. We are made of the same stuff as digital computers (protons, neutrons, electrons, quarks, etc).

If you want to assume our model of physics is wrong, and some other underlying rules or beings are governing what happens, then there is really no reason to believe machines can't be conscious. We don't know enough about the real rules or beings to make any claims whatsoever.

No matter whether you believe in science or magic, you should believe that AI is possible.

The thing we MIGHT have a chance of programming is a system that is capable of emerging into a similarly complex, aware, and thinking program that is capable of forming opinions and feelings in response to that same sort of programming. We have pretty much zero chance of fathoming and writing the program that is the RESULT of all that interaction.

I never said we needed to come up with the "answer" a priori. We could simply make a whole bunch of AIs (emergent ones, similar to ourselves) and keep the ones that have the properties we want, akin to breeding animals. This has been a pretty standard scientific methodology for quite some time: put a bunch of stuff together and see what happens; remove something from a system and see how it breaks. We will be "figuring out" things in this way probably long before we are able to purposefully make anything without trial and error. That doesn't mean we won't be able to do it eventually.
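The "put a bunch of stuff together, keep what works" methodology described above is essentially an evolutionary selection loop. Here is a minimal sketch in Python; the list-of-numbers candidates and the mutate() and fitness() functions are hypothetical stand-ins (anything worth calling an AI would evolve something far richer), but the select-and-vary structure is the point:

```python
import random

def fitness(candidate):
    # Hypothetical stand-in scoring function: prefer candidates whose values
    # sum high. A real system would test for the properties we actually want.
    return sum(candidate)

def mutate(candidate):
    # Randomly perturb one element of a copy of the candidate.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 1)
    return child

def evolve(pop_size=50, genome_len=8, generations=100):
    # Start from random candidates ("put a bunch of stuff together").
    population = [[random.gauss(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half (the ones with "the properties we want")...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...and refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print("best fitness:", fitness(evolve()))
```

Note that no a priori "answer" appears anywhere in that loop; the selection criterion does all the work, which is the trial-and-error point being made.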

That is most every program we write. If we have to do the thinking for them it would be much easier to skip all the middle layers of abstraction. The "AI" chat bot this way would just be an intercom system.

This is not what I meant. The AI would still be the one solving the problems in whatever clever ways occur to it (and not to us, hence the reason for the AI). I was only talking about inserting the motivation for solving these problems in such a way that the AI thinks it is the one that wants to solve the problems.

This is analogous to how we humans might be tempted to think we have sex simply because we like to do it, and decide to achieve that goal through our own free will. What is painfully obvious if you step back is that our sex drive is anything but our own decision; it was given to us via evolution. We don't get to decide what we desire. We can only decide (to some degree) how we might like to fulfill those desires.

about a week ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:Moot argument (417 comments)

Horrific death tolls certainly, but genetic evidence suggests our species has been decimated to around 2000 individuals in the past - you'd be hard pressed to get within orders of magnitude of that with bombs, no matter how large and numerous, unless you're talking planet-busters, which would be *many* orders of magnitude larger than anything we've even attempted.

I wasn't talking about energy literally able to break apart our planet, but maybe enough to hit the whole surface, or maybe just the large land masses (i.e. not small islands). As I said, I agree it would be hard to kill every single human being in one volley of nuclear strikes (especially considering that this would necessarily include those orchestrating the attacks).

Our largest nukes are barely a few orders of magnitude. Even a full-yield Tsar Bomba would only have released ~420,000TJ - your average hurricane dissipates that much energy every day or two.

Hurricanes don't dissipate that much energy in seconds. If they did, we'd probably have higher death tolls.
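For scale, a back-of-the-envelope comparison (a sketch, not a precise model: the ~420,000 TJ figure is taken from the quote above, the ~1.5 TW kinetic-energy dissipation rate for a hurricane is a commonly cited estimate, and the microsecond release time for the bomb is an assumed order of magnitude):

```python
bomb_energy_j = 420_000e12   # ~420,000 TJ, the full-yield figure quoted above
bomb_release_s = 1e-6        # assumed: the bomb dumps its energy in ~1 microsecond
hurricane_power_w = 1.5e12   # commonly cited kinetic-energy dissipation rate

days = bomb_energy_j / hurricane_power_w / 86_400
power_ratio = (bomb_energy_j / bomb_release_s) / hurricane_power_w
print(f"hurricane needs ~{days:.1f} days to dissipate the bomb's energy")
print(f"bomb's instantaneous power is ~{power_ratio:.0e}x a hurricane's")
```

Under these assumptions the total energies are indeed comparable over a few days, but the instantaneous power differs by roughly eleven orders of magnitude, which is exactly the timescale point.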

As for alternate AIs - if we have a genocidal super-mind trying to kill us, the first clear evidence will probably decimate the population, or at least those individuals best able to understand it - why would it tip its hand before then?

Not disagreeing with this. But likewise I think there would be pockets of human beings left.

*if* you could still muster the resources to create another AI, what makes you think we could make it any more tractable than the first? In a war between humans and AI, why would an AI want to side with the aliens against one of its own kind?

I didn't say anything about making it tractable. Human beings aren't even tractable, if for no other reason than that they can't be stopped from creating intractable AIs. Why wouldn't it want to side with us? Why would it want to side with "one of its own"? What evidence would there be that another AI would treat the first AI as one of its own rather than as competition? We don't know. It's a big question mark. Why would we do this? It might be our last chance to survive.

As for brain-integrated AI - that's another *massive* leap in technology you're proposing; it's not like our brains are plug-and-play. We might be able to add enhancements, but to actually do the raw thinking external to the brain?

Our brains are sort of plug and play. They are pretty malleable in terms of using information if it's available. I can see us being able to use computers to give us fast and accurate arithmetic as a sixth sense. We can already do this by just using a computer, but it doesn't feel like an additional sense to us, and the calculator interface slows us down. I think enhancements like this will become commonplace and improve over time, and as we come closer to developing AI, those same advances will start to enable these enhancements to do some of the "thinking" (i.e. rather than just processing) for us. I think AI and our own brain enhancements will likely co-evolve.

Maybe AI will destroy the human race gradually through people replacing more and more of their brains with artifacts until the biological human part of their brain eventually becomes insignificant. If this happens slowly enough, maybe no one will notice or even care.

about a week ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:programming (417 comments)

Heat is a great example of a true hardware problem, but it doesn't make computer programs "malfunction" (i.e. not follow what they are programmed to do) so much as it just makes them stop altogether.

Maybe faulty memory is a better example of a hardware problem causing a program to not do what it was programmed to do without immediately failing in a catastrophic way.

And like I said, most hardware is probably not perfect, in addition to the heat and cosmic ray factors; but even so, 99.99% (I'm guessing) of the instances of a computer program not doing what it was programmed to do are the result of a software bug.
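To make the faulty-memory point concrete, here is a toy sketch (an assumed scenario, not a simulation of real RAM faults): a single flipped bit silently changes a stored value without any crash, which is exactly a program "not doing what it was programmed to do" while continuing to run:

```python
import struct

value = 1.0
bits = struct.unpack("<Q", struct.pack("<d", value))[0]   # raw 64-bit pattern
flipped = bits ^ (1 << 52)                                # flip one exponent bit
corrupted = struct.unpack("<d", struct.pack("<Q", flipped))[0]
print(value, "->", corrupted)                             # prints: 1.0 -> 0.5
```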

about two weeks ago

AI Expert: AI Won't Exterminate Us -- It Will Empower Us

TsuruchiBrian Re:Moot argument (417 comments)

We created a super-weapon whose yield was roughly in line with predictions, and whose fallout was a recognized if not well understood aspect (and it's still not, though the Chernobyl exclusion zone suggests it's not nearly as bad as we feared).

Yes, we know the yield. And it's enormous. The two bombs actually used in war are minuscule compared to the yields that can be achieved by more advanced bombs, not to mention the sheer number of nuclear warheads that were produced.

I wasn't referring to the radiation from nuclear weapons, when citing them as capable of making our species extinct.

The whole reason we spent so many resources building the suckers is that we could predict exactly how mind-bendingly destructive they could be.

What I am saying is that there is a limit to how destructive something can be to the human race (e.g. making it extinct), and nuclear weapons are already near that cap, if not well beyond it. The only remaining part of the equation is how likely it is for these weapons to be used again in a way that jeopardizes the species. I don't know the answer to that, but I do know we have a lot of weapons, regimes (even relatively responsible ones) topple fairly regularly, and there are plenty of people (even if only a small % of the total population) willing to destroy the earth if they were allowed to.

My point though, is that we could pretty much understand the enemy - they were every bit as human as ourselves after all.

Can we understand Muslim extremists willing to blow themselves up, along with as many innocent people as they can, to try to get to heaven? Are they just as "human" as the rest of us? They are certainly biological humans. I think they are clear evidence of the wide variety of different humans that exist (i.e. we are not all the same).

Against a fundamentally non-human and potentially vastly superior intelligence though, there would be no such "balancing out"

Why not?

We create the AI. Maybe we could create another one that was more friendly to us. Maybe we could integrate the AI into our own brains and become super intelligent as well.

about two weeks ago

