GC sucks, real programmers can do memory management, blah blah blah. Tell me the last time a programmer made billions because he "could manage memory," and I'll show you plenty of poorly written websites, apps, and software that suck at memory management yet still managed to become popular and used by millions.
The market decided long ago that fewer programmer hours were better than users waiting a few seconds every day for their device to GC. Users don't exactly like it, but it works, and they get their hands on a more than usable product faster.
But back to the article. The article has some fancy charts about how the iPhone 4S only has 512MB of RAM. Ok, a mobile device isn't going to run with a swap file because, well, the manufacturer decided to skimp on flash chip quality, so not only does writing to flash suck, but it also runs the risk of forcing the cells to over-provision (meaning shrink in usable capacity). But the iPhone 4S will be 2 years old in 4 months! 2 years = EOL in the phone world.
How about a more current phone? Ok, the Google LG Nexus 4, which will become 1 year old in 5 months, comes with a whopping 2GB of RAM. And it's a relatively cheap phone! That's already half of my 2011 Macbook Air's RAM. Prediction? In 4-5 years, mid-range phones will be shipping with 4GB of RAM.
Ok, let's go the other direction. Let's say we all agree and programmers should sit down and manage memory again. Hurray! Problem solved? No. Because programmers are inherently bad at memory management. Memory will leak. Look at some popular web browser named after a type of animal. So instead of your phone pausing for a second to GC, now your app just crashes! AWESOME.
The standard software engineering practice still applies. Design your system. Build your system. Once it is usable and more importantly has a market, then profile for optimization.
Ask Slashdot: Mac To Linux Return Flow?
Do you need scrollbars eating screen real estate when they aren't needed or you aren't scrolling?
Yes, you do, because small windows that appear to be simple forms give no indication that scrolling is available when it actually is. In a web browser maybe you could hide them, but otherwise it is confusing to have to learn to "try" to scroll when not every window is scrollable.
As for other critiques of OSX, I would say Finder and app management (meaning Finder > Applications, Launchpad, the Dock) are also incredibly annoying and confusing. Ever since Microsoft introduced the Start menu, Apple seems to have been butt-hurt and intentionally obfuscates application launching and organization as some sort of silent retaliation against Microsoft's obviously successful usability paradigm. It is so bad that even iOS prefers you to manage your own damn icons. The iOS icon grid was great...for the original iPhone. It is a giant mess now.
We Aren't the World: Why Americans Make Bad Study Subjects
I wouldn't say this information is useless though. It just points out how powerful cultural norms are in shaping a society's decisions. That means an alternative strategy would be: change the culture and you can predict the general outcomes.
Ask Slashdot: What Is Your Favorite Monitor For Programming?
My Panasonic 32" HDTV has an option to turn off the overscan, so I can use it perfectly as a monitor. It even has an older D-sub/VGA input so I can hook up older PCs.
The difference is the pixels are less squarish because of a cheaper filter technology (more expensive TVs come with a better filter and result in monitor-like pixels) and the colors are slightly over-saturated. Despite this I am happy with it because I can sit 1 foot further away from the TV, and since I have it mounted into the wall above my desk, I now have a lot more usable desk space. Both of these upsides translate into more productivity and less squinting.
Google Store Sends User Information To App Developers
They aren't - Google Play is the merchant, the developers are the manufacturers.
And you're incorrect. If Google Play were the merchant, then they would collect sales tax on my behalf, but Google has chosen to put this weight on the "manufacturer"; therefore I as a developer become a "merchant", and Google Play is nothing more than a distribution mechanism and marketplace. This is why I receive information about customers and their locations, so I can correctly compute taxes.
Ask Slashdot: Best Alternative To the Canonical Computer Science Degree?
First off, there's a lot of stupidly bad advice here. The OP states that his intention is to become a web developer and feels his current 2 years of CS is useless. Naturally everyone becomes polarized and offers their bad advice. By the same token, most bad programmers will do the exact same thing when a customer comes to them and asks for a solution to a problem. The correct advice should first identify the problem correctly, then offer the right insight to lead the OP to the correct solution for his unique situation.
here's the deal: I'm in my second year of a computer science degree, and the thought of wasting two more years, getting left in the dust, and becoming irrelevant has me horrified. I want to start my web development career now. Or at least as soon as possible. I can drop out and devote 6 months to teaching myself, but I want something more structured.
Why do you think you'll become irrelevant? Because the technology you learn today will be deprecated for next year's flavor-of-the-month technology? And if that's the case, why do you think learning that particular technology will grant you any more longevity?
Consider this: technology will always become obsolete. If you accept that, then you will continuously be forced to learn new things regardless of how you learn it.
Secondly, why do you want to start your career right now? Is it out of envy? A feeling of wasting time?
I would be lying if I said I didn't wish I had graduated 4-5 years earlier. After my first year in college, the internet bubble burst. I was entering CS just as older peers were landing rock star programmer jobs with little effort. And that all quickly changed.
At the same time, the core fundamentals of computer science are allowing me to stay relevant today. I started with the web, went on to complete my 4 year CS degree and now I've been able to learn the Android SDK on my own time without the aid of classes. What little I remember of my advanced game programming course (I didn't stay in it) and linear algebra has allowed me to work at the 2D level canvas without ripping my hair out.
When you understand and correctly apply the theory, you are able to digest much more complicated things much more quickly. You know a bad algorithm when you see it. You know how to correctly optimize rather than wasting your time with trial and error. But if you sit through class thinking "I don't see the direct connection" well consider trying to do algebra without knowing how addition works. That's what you're up against.
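To make that concrete, here's a toy example of my own (not from any particular course): two ways to check a list for duplicates. If you know the theory, you spot the quadratic one immediately and know what the fix looks like.

```python
def has_duplicates_naive(items):
    # O(n^2): compares every pair; fine for tiny lists, painful at scale.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates(items):
    # O(n) expected: a hash set remembers everything already seen.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both give the same answers; only the theory tells you why one of them falls over on a million-element list.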
Now despite that, school isn't necessarily best for everyone. If you feel you are capable of learning things on your own, no matter how complex or how convoluted, then school may actually slow you down. But if you feel you can't reliably teach yourself, then school is a good option.
If you feel you can go faster, then do so. By that I don't just mean quitting school. You can actually accelerate your schooling if you have the desire. I had the option of graduating a quarter early, but I chose not to in order to explore other topics the school offered (one was the game programming course). Looking back, graduating in 3 years is actually doable with summer courses and maxing your units per semester/quarter.
Finally, don't discount the trade-offs. Starting work early is good for those that want to be entrepreneurs. For those that simply want a desk job for the rest of their lives, it is probably a really bad idea. Never again will you be surrounded by people of the exact same age, and never again will you have culturally "approved" time to actually just sit down and learn whatever you want. That includes studying abroad on educational loans and studying seemingly useless topics. At my age, people around me find more interest in these sorts of topics than their own specialty simply because they're hired and forced to work on their specialty for at least 8 hours a day. As a student I thought I could sit in front of the computer all day; today I look for things to get me away from the computer.
The Trouble With 4K TV
Exactly. I bought a 32" LCD 1080P TV and mounted it directly into the wall above my desk. The result is the TV no longer takes up desk space meaning I can actually use the desk...as a desk! Other benefits are I can actually sit back in my chair and still see clearly despite being 4-5ft away from the TV.
Now if a 4k 40" or 42" TV came out, I would buy it in a heartbeat. The problem with 32" at 1080P is the pixels are quite large. 4k would solve this and give me a huge increase in usable resolution.
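For what it's worth, the pixel-density math backs this up. A quick sketch, using the screen sizes and resolutions discussed above:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixels per inch, computed along the screen diagonal.
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 32)))  # 1080p at 32": ~69 PPI, hence the large pixels
print(round(ppi(3840, 2160, 40)))  # 4K at 40": ~110 PPI, typical desktop-monitor density
```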
What Are the Unwritten Rules of Deleting Code?
If you're going to do this, YOU HAD BETTER COMMENT ON WHY YOU COMMENTED IT OUT. Too often I come across a block of code that is just commented out. A quick glance suggests it once had a purpose, but for what reason, I'm not sure, because apparently it isn't needed anymore. So then I ask "why?" But there is no answer to this question. This is possibly the worst feeling when working with code--the idea that something was there, now is not, but they left it in for some reason and you don't know why.
Now you're left with a couple scenarios:
- The code caused a bug, but had some sort of purpose.
- The code was replaced, and the new implementation is experimental, which leads to the question:
- Exactly which lines of code did it replace?
- It is dead code
It is like walking into an offline factory line. Most bystanders will simply think "I had better not touch this or it could cause serious problems," so the only people in full control were the last shift of factory workers for that line. Meanwhile, similar lines are still operating, so you're not exactly sure why this particular production line was halted.
Meanwhile, if the code was deleted under version control, diff gives me a much clearer picture of exactly what changed. If you just comment out a giant block, leave it there, and check it in, then diff shows me two lines gaining a comment marker (in the case of block comments) or every line modified (for single-line comments). Now I have to diff again against the version before that to get the picture I want, and that diff may include other changes that aren't relevant.
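You can see the effect with Python's difflib. In this sketch (the five-line function is made up for illustration), commenting a block out produces twice the diff noise that a clean deletion does:

```python
import difflib

before = ["def total(xs):\n",
          "    s = 0\n",
          "    for x in xs:\n",
          "        s += x\n",
          "    return s\n"]

commented_out = ["# " + line for line in before]  # block left in, commented
deleted = []                                      # block removed outright

def change_lines(a, b):
    # Count the +/- lines in a unified diff, ignoring the file headers.
    diff = difflib.unified_diff(a, b)
    return [l for l in diff
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]

print(len(change_lines(before, commented_out)))  # 10: every line shows as changed
print(len(change_lines(before, deleted)))        # 5: a clean removal
```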
Either way, if you are commenting out code for the first two scenarios I listed, you need to think again about your version control system and manage it better. Experimental or untested stuff should be branched, not visible to other devs; otherwise people mistake it for production code. If I see commented-out code blocks, I'm simply going to delete them myself because I can't read your mind and never will be able to.
I'm not sure why this comment is modded so highly. The reason web programming sucked was that browsers restricted our capabilities. Add to that the need for everyone to agree on standards, which caused browser technology to adopt newer capabilities slowly. It is still the same problem even today.
Think about it for a second even if you aren't a web programmer. There was a reason (not just Microsoft) for IE6's adoption: at one point, it actually implemented more than the competition. Mozilla was around but not nearly as capable. Netscape, too, was around but clearly lacking in the quality and performance of IE at that particular point in time. But everyone would rather believe that IE6 was just flat out terrible throughout its entire existence. To that I ask, would you like to use and program for Netscape 4 or 5 during the same period IE6 was dominant? Obviously not! What we should have gotten is more intense competition rather than a lengthy side lawsuit about how MS abused its monopoly.
To make a long story short, Netscape was not our savior, Microsoft was still the bad guy (honestly), and the world finally learned that monopolies suck. The true turning point came when Mozilla released Firefox and other companies like Apple and Google began creating their own browsers for the purpose of expanding web technology. It wasn't until Firefox, Safari, and Chrome that the stagnation in client browser technology finally ended.
Any industry stuck in an old way of doing things is going to suck. It doesn't matter if you're the smartest person in the world. If you can only use a hammer, nails, and wood, you're not going to be building skyscrapers quickly or efficiently anytime soon. This is exactly what restricted the web and the people who worked on the web.
Now to address some key points presented by the parent:
The entire AJAX and framework of web programming is wrong. It was a quick hack added so that you could make dynamic apps using existing technologies without major changes to clients. But its layered hack upon hack upon hack. We really need to scrap it all and come up with a web application programming stack- a new markup language that's meant to do pixel perfect rendering (HTML is not, but its used that way), an HTTP replacement that's stateful rather than stateless, a cleaner way of sending data back and forth from the server. But if you write on top of an ugly platform, you're going to get ugly code.
There is so much wrong with this claim that I don't know where to begin. Let's start with stateless programming since that's the key theme in this opinion. I would be curious to hear what the parent's idea of "right design" is when a service is expected to serve millions of requests simultaneously. At some point you're going to be forced to parallelize that work. Congratulations, you've just been forced one step closer to implementing a stateless system. You see, in a stateless system, it is much easier to cut the work into bite-sized chunks to be handled out of order. When you restrict yourself to state, you're at the mercy of single-threaded technology to get your work done. It is 2013; we have CPUs in our pocket phones that have 4 cores. Stateless is here to stay, and it will become more pervasive even without any web interaction.
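Here's a toy sketch of what I mean (the handler and request format are invented for illustration): because each request carries everything the handler needs, the work can be fanned out to any worker, in any order, with no shared session state.

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    # Stateless: the response depends only on the request itself,
    # so any worker can process any request.
    return {"user": request["user"], "result": request["n"] * 2}

requests = [{"user": "u%d" % i, "n": i} for i in range(8)]

# Fan the requests out across 4 workers; no session state to coordinate.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle, requests))

print(responses[3])  # {'user': 'u3', 'result': 6}
```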
Let's also look at how stateful systems do under latency. Just go play any online game to get a feel for this, and log in to some far-off server that crosses an ocean or two. Your latency will suck, packets will be dropped, and the overall experience will be miserable. The internet was designed to be world-wide. It was designed to travel great distances where latency and reliability were real issues. Packets are not guaranteed to arrive. This is why we have TCP and an in-your-face kind of restriction that things can time out. I mean, we're sending huge amounts of data around the world, for Christ's sake. Until we have the capacity to serve everyone's data needs redundantly, and a magical guarantee that a packet will arrive around the world safely, I don't see stateful systems implemented safely for world-wide consumption becoming the norm any time soon.
Now, we have implemented such systems. But to the user, they appear to be unstable or unreliable. Take for example IRC. Occasionally you will get disconnected from the server, for the same reasons you get kicked out of an online multiplayer game. Somewhere along the path of servers and routers, a failure happened, and now your state is worthless and has to be rebuilt all over again. If this were the mode of programming today, people would be writing the same slashdot post about why we can't maintain a solid connection for the rest of our lives instead of why web programming sucks.
Finally, let's talk about HTML and rendering technologies. Web browsers are going to get their "pixel perfect" rendering engine in the form of the 2D canvas. That's an HTML5 spec, so that issue will eventually be addressed. Beyond that, I'm not sure exactly what better option we have than HTML for non-pixel-perfect rendering. If the parent is insistent on 2D canvas as a replacement for hypertext, I say he can go to hell. That doesn't imply that HTML4 or even HTML5 is perfect, just that it takes care of lots of things people shouldn't need to care about just to relay hypertext-like information. For what it is worth, HTML has gotten us pretty far for all of the caveats it has. I don't know of any other standard that allows you to do what HTML does efficiently.
For example, let's take a step back and see what people did prior to HTML to "publish" their opinions. They were called editorials and were published in print media like newspapers. You would send a bunch of text called a letter to a newspaper, and if they wanted, they'd print it in their editorial section. But they wouldn't copy your letter verbatim as it appeared; that would be stupid. Instead they ran it through their entire typesetting process so it would appear on the paper correctly. That's right, this was a team effort, now replaced by something that people can learn to do alone in a few hours and distribute electronically, world-wide, for free. The result obviously isn't of the same quality, but for our technology, it is more than enough.
World's Longest High-Speed Rail Line Opens In China
It is a success, because it works, and tons of goods and millions of people use it everyday.
The same argument can be used in Europe and Asia. You never refuted the grandparent's claim, which is that the interstate highway system was funded by government money, not by user money, the very standard you used against HSR.
HSR, will be not be, because it is simply too limited.
Europe and most of Asia would disagree.
I can take my car to from Sacramento to LA in about 6 hours, at a cost of (Gas Guzzler) less than $150 in petrol, taking my family (four additional people) as a bonus.
Yet in your entire analysis, you only account for the cost of gasoline. You didn't account for the cost of the roads you would use (they are not free and cost money to maintain, or in your words, LOSE money). You didn't account for the cost of vehicle depreciation, license, registration, and maintenance. You also can't sleep and drive at the same time. You're not supposed to eat and drive at the same time. And you're definitely not supposed to drink and drive at the same time.
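To put rough numbers on it (every figure below is my own assumption, not the parent's): gas alone badly understates the cost of driving once you fold in depreciation, maintenance, and the rest. An all-in per-mile rate, in the spirit of the IRS standard mileage rate, tells a different story:

```python
# Illustrative assumptions only -- adjust to taste.
miles_one_way = 385          # rough Sacramento-to-LA driving distance
mpg = 20.0                   # the "gas guzzler" in question
gas_per_gallon = 4.00        # assumed pump price
all_in_per_mile = 0.565      # assumed all-in rate: fuel + wear + depreciation

gas_only = miles_one_way / mpg * gas_per_gallon
all_in = miles_one_way * all_in_per_mile

print(f"gas only: ${gas_only:.2f}")  # roughly $77 one way -- what the gas-only math sees
print(f"all in:   ${all_in:.2f}")    # roughly $218 one way -- closer to the true cost
```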
Meanwhile an elementary school child in Japan can travel at will as long as he has enough money to pay for the fare. The traveling business man can still drag himself onto the train despite having a bit too much to drink. Most of all, each household is perfectly happy with one car, while here in California each adult or older teenager needs their own vehicle.
AND once I get there, I would still need to rent a car.
HSR itself isn't enough, I'll give you that. Intra-city rail and adequate public transit in major metro areas would also be necessary. LA is already on its way with Measure R.
And further trips, I would simply just take a plane.
So you admit that cars aren't a solution, yet planes aren't a solution either. By that I would argue that the more modes of passenger transportation we have, the better off we are. In Japan the airlines compete with HSR. This directly benefits the traveler--because of the additional competition, fares become cheaper.
HSR is romantic notion for idiots. IT never pans out like the proponents claim.
In Japan, the rail companies are private entities, just like airlines and car manufacturers. They turn a profit. Why? I'll give you a few reasons:
- Japan's expressways are all tolled: users must pay a fee to use the system. In America, the interstate is subsidized or socialized--whichever term you prefer.
- Rail companies are able to acquire land and re-purpose it for transportation. In California, the Interstate system was approved prior to the NEPA and CEQA regulations. These environmental regulations delay the building process for any project (including freeways) and make it more expensive. The primary target at the time was freeways due to NIMBYism. Keep in mind that the government at this time was pretty much rolling through people's backyards with freeways and using eminent domain to make it happen.
- In Japan parking is not "free" or socialized. You must pay for your own parking.
- Rail companies in Japan don't just operate trains, they also acquire and redevelop areas near train stations turning them into giant shopping malls or upscale living areas. This means users of the system have access to most retail they'll ever need. Some stations even have integrated retail and dining just like airports--but it works better than airports because of more repeat commuters.
Now, in Japan people want to live near a train station because it means convenience. Property prices generally increase the closer they are to a train station--and decrease as you get further from one. And people are free to own cars and drive as much as they would like, yet they still choose the trains. Keep in mind that Japan, especially during its boom years, was not automobile-averse. In fact, during the boom years it was common for individuals to purchase a car even though they wouldn't use it. Yet the public transit system and rail network still flourish.
That's not to say that all cities in Japan are car-free. Many cities in fact require a car or an alternate mode of transit (bicycle). Yet it all works together, trains and HSR included.
Now I'll go ahead and agree with you that perhaps the government shouldn't be managing HSR. The root of the problem, I believe, is that we do not treat modes of transit equally. Automobiles are heavily subsidized through road and interstate construction, so it is like we are encouraging people to drive by giving them access to something that would normally be very expensive. The solution is actually to stop subsidizing the interstate, roads, and parking so that other modes of transit can become competitive. Roads (namely city roads) are certainly a public good; a city road can be multipurpose and support not just cars but bicycles and pedestrians. So I'm all for subsidizing city roads. Interstates/freeways and parking, however, shouldn't be. I would cut interstate funding through taxes and switch to a toll-based system: all users pay a toll, and tolls must support the maintenance of the road. For parking I would do the same: users pay for parking, and all municipal minimum-parking regulations are revoked. This means it would be possible to build stores with no parking in high-density areas rather than paying for an expensive underground garage that would sit unused anyway.
Finally, I would separate the taxation of land and improvements. Currently you are taxed on the value of the property as a whole, so if you make an improvement to the property, like building a skyscraper, you end up paying more property taxes because the total value has gone up. What this encourages is for people to purchase land, build a parking lot, and wait for nearby development to increase the value of the area. Then, when the increase comes along, they sell the land at an increased price. If instead they were taxed on the value of the land alone, they would be encouraged to actually build something of value instead of sitting on a vacant parking lot.
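A toy comparison of the incentive difference, with rates and values invented purely for illustration:

```python
def property_tax(land_value, improvement_value, rate_percent=1):
    # Conventional property tax: improving the lot raises the bill.
    return (land_value + improvement_value) * rate_percent // 100

def land_value_tax(land_value, improvement_value, rate_percent=3):
    # Land-value tax: only the land is taxed (at a higher rate);
    # the improvement_value carries no tax penalty.
    return land_value * rate_percent // 100

parking_lot = (1_000_000, 50_000)     # speculative surface lot
skyscraper = (1_000_000, 10_000_000)  # the same land, actually developed

print(property_tax(*parking_lot), property_tax(*skyscraper))      # 10500 110000 -- building is punished
print(land_value_tax(*parking_lot), land_value_tax(*skyscraper))  # 30000 30000 -- same bill either way
```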
Is It Time For the US To Ditch the Dollar Bill?
This is kind of how the Japanese yen works, except they haven't phased out their versions of the "nickel" and "penny". In yen, they have 1, 5, 10, 50, 100, and 500 yen coins. The smallest bill is 1000 yen. 100 yen feels like about $1 in their economy.
Japanese people hate 1 yen coins just as much as we hate pennies, and there is usually a donation box at the register of big chain stores for you to donate your 1 yen coins. Or you can go to a shrine or temple and get rid of them there. The 5 yen coin, however, symbolizes luck in their culture, so they will probably never phase it out.
I'm all for the USA switching to dimes and dollar coins, as well as reintroducing the half dollar coin, while phasing out $1 bills and all of the remaining coins.
But, but, what about vending machines! Just give up. One thing I hate about today's American culture is how everyone expects their opinion to be preserved for their own benefit when it is clearly dragging down everyone else.
How We'll Get To 54.5 Mpg By 2025
Numerous studies have shown that traffic jams are simply caused by people following too closely.
Autonomous cars aren't going to magically solve the traffic congestion problem. You can look at it in two ways. If human-driven cars are following each other too closely, that means the road is congested even though traffic may be moving at the speed limit. Another way to look at it is that the road is already "full". A road that isn't "full" means each car has sufficient space to account for changes in speed. There's a video of a quick experiment some kids did where they took all four lanes of a highway and drove right next to each other at exactly the speed limit. What followed was traffic. So in a sense, when drivers exceed the speed limit, they are naturally "increasing" the capacity of the road.
Now let's say every car on the road is autonomous and the average length of each car is 15ft. That means 150ft of a single lane can fit a maximum of 10 cars nose to tail. But let's add a merge in the middle of this 150ft of road. If the cars are riding bumper to bumper, merging cars will not be able to merge in. The only way a car could merge in would be if the cars behind slowed down and let it in. If the flow of merging cars is constant, then cars will continuously slow down to let more cars in. Thus you get traffic.
Even if you stop and say "hey, let's put 15 feet or so of space between each car," well, now you've cut the capacity of the road in half, and if a car merges in during a "full" condition, it will still create traffic because the cars behind will slowly re-create their 15 feet of buffer space. If the flow of cars coming in on the merge is constant, then all cars behind the merge will continuously slow down to maintain their 15ft of buffer.
Now you say "1 car length is all an autonomous car needs while human drivers need more." I would say that human drivers can get by with close to 1 car length at the risk of increasing accidents. Which is exactly what happens. Go to LA during rush hour and drive along the 405. There won't be much more than 1 car length between cars. The only way you can merge in is to "force" your way in by putting the front of your car in the slight gap so the driver behind you has to slow down and let you in otherwise he will hit you.
It is simple really. If a road can only handle 10 cars per second, then the second you get more than 10 cars per second, traffic will occur. It doesn't matter if the driver is a computer. We see the same conditions for bandwidth on the internet, where everything is controlled by a computer.
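The back-of-the-envelope version of that capacity argument, using the 15ft figures from above:

```python
def lane_capacity(speed_mph, car_length_ft=15.0, gap_ft=15.0):
    # Cars per second a single lane can carry at a steady speed:
    # each vehicle occupies its own length plus its following gap.
    speed_fps = speed_mph * 5280 / 3600
    return speed_fps / (car_length_ft + gap_ft)

print(round(lane_capacity(60), 2))            # 2.93 cars/s with a one-car-length gap
print(round(lane_capacity(60, gap_ft=0), 2))  # 5.87 cars/s bumper to bumper -- until anyone merges
```

Exceed that flow rate at a merge and the queue grows no matter who, or what, is driving.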
Apple CEO Tim Cook Apologizes For Maps App, Recommends Alternatives
Version 1, version 2, version N. It doesn't matter. If you release crap, it should be treated as crap. This is how a market is supposed to work.
Is the Google Nexus Q Subtraction by Subtraction?
For specs it looks like a decent device. For price and features, it is certainly a hard sell with the only compatible devices/media being Google content and specifically movies and music.
It seems like a premature launch. As a developer I don't care much for hackability, and the average Joe isn't going to go out and buy this thing for its hackability.
What they should have done is at least provide a developer API. If Pandora, Netflix, and the rest had access to this thing, I'm sure it would be much more palatable as a viable product.
Also if it had a real "on screen" UI, that would be great too...but I guess they really want you to buy a Nexus 7 first.
Ask Slashdot: Instead of a Laptop, a Tiny Computer and Projector?
We're currently in a "gap" in technology where most of the functions are starting to move to phones yet phones aren't quite powerful enough or usable enough yet.
Right now your best option is the Macbook Air. I own the 11" i5. Buy it and don't look back. It has plenty of power that most netbooks lack, in the smallest form factor. Also, at ~2lbs, it is as light as you're going to get. The trackpad is very usable, so you don't have to drag a mouse along if you don't need it. The keyboard is full size, so unlike most netbooks, your hands won't cramp up.
Since I bought it I've sold or given away pretty much all of my other PCs. It is my primary computer for development now. At home I connect it to a 32" LCD HDTV mounted on the wall above my desk.
It also fits into much smaller bags. So you don't need a giant bag.
Analyzing the New MacBook Pro
Don't buy a Mac because you think it has great hardware. If that is your reason for buying a Mac, go buy a PC and turn it into a Hackintosh; it's much cheaper.
I bought a Macbook Air because I liked the hardware. The OS was secondary. By now I've gotten used to OSX, it does the job. I'm not sure it makes me more efficient or anything. As a software person, I do find the unix base quite useful.
With the Air and now this new Macbook Pro, Apple is hitting target markets that no one else has. My requirements were simple:
- Must be extremely light (~2lbs or 1kg).
- Must be thin enough (for me that is less than 1")
- Must be powerful enough (i3 or i5 minimum, Intel Atom does not cut it)
- Must be capable of SSD storage
- Must be 11" screen size, no bigger, no smaller.
- Must have a method of port expansion (usb2 hubs are not good enough).
Given that, I did not buy the first or second iteration of the Macbook Air. I waited until last year's iteration, which came with Thunderbolt and i5 CPUs. Prior to that, I had tried a few different PC alternatives: first a 10" Asus netbook, and later an 11" Acer notebook with a slightly faster AMD processor. They were both extremely lacking, though they were close in the form-factor department. The first computer to pretty much meet my requirements was the Macbook Air 11" with Thunderbolt.
My use case is fairly simple. I needed a computer that is portable yet powerful enough to take with me. This makes it possible for me to bring my work with me and rid myself of all the extra computers that are totally unnecessary in my opinion. I shouldn't need a powerful desktop just to do software development. I don't play games anymore so I don't care about graphical performance. I do care about video acceleration so many netbooks fell flat in this area. The SSD is necessary for fast application startup and file access. You simply do not go back after using an SSD. Finally the 11" screen is both the minimum and maximum size I'm willing to deal with when traveling. It is both small enough to fit in a small backpack yet large enough to do work.
The only things I desire for my Air right now are:
- HiDPI screen
- Thunderbolt dock
If they could also manage to make the charger a little thinner, that would be a bonus.
Other than that, when it comes to PCs there's always something amiss. Either they fumble the key layout, the trackpad sucks in some way, or they have some kind of build-quality defect. My Air wasn't exactly perfect, but it did manage to get the key points correct. The keyboard is actually spot on (although OSX key bindings take some learning) and the trackpad is good. One thing that does suck about Macs is the limited selection of input devices; you need third-party software to make anything else work right. My Logitech G5 mouse is evidence of this: I had to use USB Overdrive to make it work better.
But it wasn't until Apple made the Airs that everyone else started to copy. I don't care about the copies; I do care that other PC manufacturers couldn't figure this out fast enough. When netbooks came out, they were happy to just keep making more colors and shinier netbooks. They never put a serious effort into making something thin yet sturdy. They didn't bother to make the hardware fast enough. Most of them pretty much just competed on price. Only now that Apple's proven people wanted something like the Air will the other manufacturers follow. That's ridiculous.
The Macbook Air is actually my second Mac. The first was the first generation of Intel Mac Minis. It was a good computer but I didn't see much point in it. It was still a time when things were underpowered, and a small desktop computer didn't serve much of a general purpose. Your comment probably would have been applicable then. But these days the portability requirements are taking over. People are moving back to the city, they're on the go, they want a computer that can match that lifestyle. For that the Air is perfect.
Apple News From WWDC and iPhone 5 Rumors
If you work in programming or anything related to graphic design or the visual arts (video included), I would say yes.
For everyone else, maybe they can get by. The problem with our current displays is that text is rendered like crap. Low-resolution displays are the entire reason sans-serif fonts (Arial, Helvetica, etc.) became popular. In print, serif fonts (Times New Roman) used to be popular because there was more DPI to work with, so the fine details those fonts depend on were actually reproduced nicely. On a low-DPI display (less than 100), serifs look like absolute junk. Yet in print a serif font is actually easier to read than a sans-serif one.
So if you read anything on an electronic device, you should want a high-DPI display because it will actually be easier on your eyes. Furthermore, things will start to scale naturally, whereas right now they turn out to be a blurry mess because we need to apply antialiasing magic to make them look right at the expense of being blurry.
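For anyone wanting to check where their own screen falls relative to that ~100 DPI threshold, the density is just diagonal pixels over diagonal inches. A minimal sketch (the panel specs below are assumptions for illustration, not authoritative measurements):

```python
import math

def dpi(width_px, height_px, diagonal_in):
    """Pixel density of a display: diagonal pixel count / diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Assumed specs for two example panels:
# an 11" laptop display at 1366x768 (11.6" diagonal)
print(round(dpi(1366, 768, 11.6)))  # ~135

# a small "Retina"-class phone display at 960x640 (3.5" diagonal)
print(round(dpi(960, 640, 3.5)))    # ~330
```

The laptop panel clears 100 DPI but is nowhere near the density where serifs render cleanly; the phone panel is, which is why text on it looks so much better.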
A Day In the Life of a "Booth Babe"
Read the article. The girls know what they're getting into and if they have issues with it, they quit.
But, again. Stupid companies. Stop using booth babes. It makes the industry look adolescent in nature, and is disrespectful to all women, and even more disrespectful to women in tech.
THIS kind of attitude is why many of us geeks can't get a date.. change it!
No, the reason you can't get a date is that your logic and self-esteem are all wrong.
First of all, the industry is merely taking advantage of a weakness in (male) psychology. It has been shown over and over again that people do judge a book by its cover, so a marketing department would be stupid not to accept that fact. They are there to increase ROI, not to be "politically correct".
Second of all, anyone can get a date with the right kind of attitude. Step 1 is to treat people like people regardless of their shortcomings. You've already judged booth babes based on some Slashdot headline and summary, so I don't doubt you'll do the same with others. A common theme in any social interaction is that people don't want to be judged; they want to have a conversation. Only when trust and understanding are established can advice be given and accepted.
Step 2 is to stop succumbing to your own perceived "disadvantages". It is true that some people will never accept you, but their logic is just as shallow as yours is at the moment, so they are not people you want to interact with anyway. As for the people who are willing to accept you, the idea is not to push them away because of your own shallowness. Once you get your head around that, you can begin to have a healthy social experience.
A Day In the Life of a "Booth Babe"
I'm not a dad, but do you realize how hard that kind of thing is? If you want to be logical about it, the easiest method is to marry the ugliest woman you can find, ensuring your potential daughters have no chance of ever becoming attractive (though these days they always have the option of plastic surgery).
These days kids have all kinds of external influences parents have no control over: their friends, the radio/TV/internet, the people they meet every day. The more restrictive you get, the more likely you are to push your kid into making rash decisions simply out of angst. Get too loose and they're more susceptible to "meeting the wrong people". While there is certainly some kind of formula that tends to succeed, it isn't foolproof. You can find exceptions in nearly all cases.
Why Do Programming Languages Succeed Or Fail?
- Ease of use
- Degree of current use by others
A language generally succeeds if both of those are true. Ease of use is a moving target: if you're writing system-level code, you're not going to want a dynamic interpreted language; if you're writing some throw-away script, however, dynamic interpreted languages become attractive.
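To sketch that trade-off, here's the kind of throw-away job where a dynamic interpreted language wins (the task and text are hypothetical examples, Python standing in for any scripting language):

```python
from collections import Counter

# A throw-away task: find the three most common words in some text.
# A few lines here; in system-level C the same job would mean hand-rolled
# hashing, string handling, and memory management.
text = "the quick brown fox jumps over the lazy dog the end"
top3 = Counter(text.split()).most_common(3)
print(top3)
```

For a script you run once and delete, that brevity is the whole point; for an OS kernel, the runtime and garbage collector it drags along rule it out.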
I'm not sure why we even need to ask/answer this question. Languages are just like products of technology. People use them based on their requirements and how popular they are. Popularity is important because if you have a problem, you know that others using the same product may have some experience with your problem so you can seek help/advice.