Visual Studio 2015 Supports CLANG and Android (Emulator Included)

maccodemonkey Re: Embrace has started (192 comments)

The iOS support I've seen so far requires you rewrite any API facing code in the Cocoa APIs. You'll get to do it in C# instead of Swift or Obj-C, but you do have to rewrite.

Not that I'm complaining. I'd hate to see all the Java style train wrecks that would come to the platform from developers blindly hitting recompile buttons.

about a week ago

The Subtle Developer Exodus From the Mac App Store

maccodemonkey Re:Not mysterious. Just lousy. (229 comments)

The sad thing is I really like the OS, and I'd be happy to develop for it if they made development accessible and quit leaving trails of unfixed bugs behind them.

How exactly is development not accessible?

- Apps do not have to be distributed through the Mac App Store.
- Xcode is provided for free along with all documentation. There are tons of other IDEs and languages as well.
- Yes, there are bugs, but all platforms have bugs. Surely as an OS X user you can see bugs as well.

I'm not sure what you're looking for to make development more accessible.

about a month ago

Building Apps In Swift With Storyboards

maccodemonkey Swift != Interface Builder (69 comments)

"But is Swift really so easy (or at least as easy as anything else in a developer's workflow)? This new walkthrough of Interface Builder (via Dice) shows that it's indeed simple to build an app with these custom tools... so long as the app itself is simple."

What? Seriously Slashdot, if you're going to have Apple articles, at least have submitters that have half a clue what they're talking about. "How good is Swift? Let's find out by using Interface Builder which is not Swift at all!"

Swift and Interface Builder can be used together, but they're not strongly related components. They're related the way a WYSIWYG web tool like Dreamweaver is related to JavaScript. They're both helpful to get what you need done, but they don't replace each other. To give you an idea of how true that is, Interface Builder first shipped in 1986. Now, it's advanced a lot since then, but it's nearly 30 years older than Swift, so obviously it's had a long life away from Swift.

Duh, you can't create a big complicated app with only Interface Builder, just like you can't create a big fancy web app with just the visual components of Dreamweaver. You've got to get down and actually write some code, which, you know, is what Swift is, Swift being a coding language and all. So I find it really odd that this post is talking about reviewing a programming language in the context of trying to use a completely different tool that is not that programming language.

about 2 months ago

Ask Slashdot: Swift Or Objective-C As New iOS Developer's 1st Language?

maccodemonkey Re:Obj-C (316 comments)

Automatic reference counting means adding retain and release messages automatically; there most certainly is a runtime hit, and that's on top of the usual memory allocator costs, which can be quite high. A good compiler can eliminate some of those retain/release calls.

Sure, but I would assume any manual memory management system worth its salt is doing retain/release. Most of the big C++ libraries do it.

Furthermore, because of deallocation cascades, a release message in such schemes can have a very high latency (don't know whether Apple tried to add workarounds).

Two things:
- Deallocation cascades are inherent in memory management. Neither reference-counted memory management nor garbage collection can avoid them. So I'm not sure what the point here is.
- As far as latency on dispatches, Apple is using tagged pointers, which use some room in the pointer itself to hold the reference count. In practice, this means there is practically no latency for updating the retain count.

And, of course, ARC has the same problems with circular references that regular reference counting has.

Which is also a problem with Garbage Collection...
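As it happens, CPython also manages memory by reference counting, so the retain-cycle problem (and the weak-reference escape hatch that ARC likewise offers) can be sketched directly. `Node` here is an illustrative stand-in, not any real Cocoa type:

```python
import gc
import weakref

gc.disable()    # isolate pure reference counting (no cycle collector)

class Node:
    def __init__(self):
        self.other = None          # strong reference by default

# The classic retain cycle: each node keeps the other's count above zero.
a, b = Node(), Node()
a.other, b.other = b, a
probe = weakref.ref(a)             # observe 'a' without owning it
del a, b
assert probe() is not None         # leaked: refcounts never reach zero

# The fix ARC offers too: make one edge weak so the cycle can unwind.
c, d = Node(), Node()
c.other = d
d.other = weakref.ref(c)           # weak back-reference owns nothing
probe2 = weakref.ref(c)
del c, d
assert probe2() is None            # both freed immediately by refcounting

gc.collect()                       # clean up the leaked pair
gc.enable()
```

Swift's `weak` and Obj-C's `__weak` play the same role as `weakref.ref` here: the back-reference observes without owning, so the cycle never forms.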

Reference counting is a mediocre memory management scheme at best; people use it in C-like languages because they don't have a choice. It is inferior in just about every way (runtime overhead, latency, memory utilization) to a good garbage collector.

I don't see how it is at all. Results are instant. Overhead is far less. Pretty much every claim here needs citation. It's hard to see how an entire process constantly analyzing object references is less overhead than pre-handling those references. Memory utilization? I have to burn a bunch of memory on a garbage collection process, and then watch my memory climb and then drop off a cliff constantly while I wait for the garbage collection to run, while retain-counted memory generally stays at a pretty stable allocation count because frees happen instantly. Puh-lease.

I saw your claim below that GC is identical to retain/release in behavior. It really is not. If I clear out a reference in Obj-C or any other retain/release language, the memory is instantly freed. In Java, I have to wait for another run of the garbage collector, which can take a while unless I manually trigger it. Yes, YOU don't have to wait for anything in the code. But ignorance is bliss.

Here's basically the comparison I'd use: Retain/release is like having an incinerator you throw your garbage into. Garbage collection is like... well... having a garbage truck. In both cases when I'm writing code I can just throw things away in my trash bin and pretend it's not there. With retain/release/an incinerator, the memory is actually gone as soon as I throw it in the trash bin. With garbage collection, I've thrown it in the bin and forgotten about it, but that doesn't change that the trash will continue piling up until the trash guy comes, which may still be a bit away.

The large amount of piled up trash is also a problem, and actually makes the cascade problem you were concerned about with retain/release WORSE. Instead of a few cascade relationships being dealloc'd at once, all the dereferenced objects are going to pile up, wait for the garbage collector, and then the garbage collector is going to have to pore through thousands of relationships all in one go, causing the program to come to a halt while everything waits for the garbage collector to catch up. I've had to clean up a few Java messes that had that problem by manually firing the garbage collector to spread out the load.
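The incinerator-versus-garbage-truck contrast is easy to watch in CPython, which frees a refcounted object the instant its last reference drops, but leaves reference cycles piling up for the collector. A minimal sketch:

```python
import gc

log = []

class Tracked:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        log.append(self.name)      # record the moment of destruction

# "Incinerator": last strong reference dropped -> destroyed right now.
t = Tracked("plain")
del t
assert log == ["plain"]            # freed at the exact moment of release

# "Garbage truck": a cycle piles up until the collector comes around.
u = Tracked("cycled")
u.self_ref = u                     # self-cycle defeats pure refcounting
del u
assert log == ["plain"]            # still piled up...
gc.collect()
assert log == ["plain", "cycled"]  # ...reclaimed only when the truck ran
```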

about 2 months ago

Ask Slashdot: Swift Or Objective-C As New iOS Developer's 1st Language?

maccodemonkey Re:Obj-C (316 comments)

CLR in this context means a very large standardized library, which is not subject to fragmentation nor availability. It runs or it doesn't, and it behaves as documented (by google or stack overflow, not necessarily MSDN).

That's not what the term CLR actually means, nor does that necessarily apply to Swift. Swift does indeed have a slightly larger core function base than Obj-C, but still not enough to build an entire app. For example, there is no I/O (either file or console, other than printing to console), networking, or GUI support in the core implementation. You won't be building much with the Swift core library by itself.

Here's a list of every single function present in the Swift standard library as of June. That every single function can be listed on a single web page should tell you that it's nowhere near as expansive as the Java or .NET standard libraries.

Its primary intended API is Cocoa, but that is entirely detached from the language. As of posting time, Cocoa is still entirely written in Obj-C, so the primary library intended for use by Swift is not even written in Swift itself. But I digress: Swift itself definitely does not have a very large standard library. When Swift is ported to other platforms you won't see Cocoa anywhere with it. And Cocoa is different on iOS and Mac, so even if you're sticking to Apple platforms you don't have a common library between the platforms. So right there, even if we decide Cocoa could be called Swift's standard library (which it isn't), it fails the fragmentation test you've put forward.

Not to mention, the R in CLR stands for runtime, and we're talking about the Swift standard library, not the runtime (which is the Obj-C runtime, not the Swift standard library anyway.)

about 2 months ago

Ask Slashdot: Swift Or Objective-C As New iOS Developer's 1st Language?

maccodemonkey Re:Obj-C (316 comments)

It was my understanding that if you want "complete" control, you still need to use ObjC, and that Swift was for dashboards, things previously known as WebApps, and other lightweight situations where you aren't actually doing anything novel, just packaging an interface to a datastore or moving sprites around.

That said, Swift is just as good on inheritance as ObjC, and does garbage collection correctly (benefits of a CLR).

ObjC has been tuned to OS X/iOS, and if you write in ObjC, you should be able to make a single back end that's easily portable to OS X as well as iOS; Swift would be more for iOS only.

I really do like the real-time iteration available in Swift though.

That said, my opinion must be crap, because I'm older than Java too :D I still like Pascal and Common LISP, but wouldn't write a modern application in them (flashback to writing Avara mods in the 90's using ClarisWorks). Most stuff I write these days is in C or Python.

Oooof. So much wrong in a single post. Let's review....

Swift definitely does not do garbage collection. Obj-C actually had a garbage collector for a while (Swift never has) but it was optional, and support for it has ended.

What Obj-C has now is something called ARC (Automatic Reference Counting). At compile time (not run time) the compiler does a static analysis of the code and determines where it needs to add memory management code, and then quietly does so for you. This means there is no run time hit, and behind the scenes everything is still manual memory management. Sometimes you still need to hint to the compiler what to do (usually when trading pointers with C), but 99.99% of the time it just works.

Swift is built on the same runtime as Obj-C, so it inherits ARC. With Obj-C, you can turn ARC off and continue writing manual code, and I'm not sure if Swift allows the same, but it's the exact same behavior. Swift uses the same manual memory management functions as Obj-C in the background, while in the foreground the developer still writes without memory management. I'm not sure what this "benefits of a CLR" is you're talking about, as that's a term usually associated specifically with the Common Language Runtime of the Microsoft language family, but it's neither here nor there. Swift does not run in a VM (it's natively compiled just like C or Obj-C), and it does not have a garbage collector. (But the compiler will add your memory management code for you.)
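As a rough sketch (toy code, not Apple's actual implementation), the retain/release calls an ARC-style compiler inserts for you at compile time look something like this:

```python
# Toy refcounted object standing in for an Obj-C/Swift heap object.
class RefCounted:
    def __init__(self):
        self.refcount = 1          # the creating reference owns it
        self.freed = False
    def retain(self):
        self.refcount += 1
        return self
    def release(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True      # stand-in for dealloc

# Source as the programmer writes it (no memory management in sight):
#     let a = makeObject()
#     let b = a
# What the compiler effectively emits:
a = RefCounted()                   # +1 from creation
b = a.retain()                     # compiler-inserted retain for 'b'
b.release()                        # compiler-inserted release as 'b' dies
assert not a.freed                 # 'a' still owns the object
a.release()                        # compiler-inserted release as 'a' dies
assert a.freed                     # deterministic, no collector involved
```

The point is that the bookkeeping is decided by static analysis before the program ever runs; at runtime it's plain retain/release, exactly as if you had written it by hand.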

As far as Swift being multi platform, Swift most definitely for sure runs on OS X, so the language choice has absolutely no bearing on what platforms you want to port between. I have a partially Swift project going on the Mac right now. Swift is definitely not iOS only. Beyond that, it looks like Apple will be working to open source much of it and move it to other platforms.

I'm not sure what this business is about Swift being for lightweight solutions. It runs on the same runtime as Obj-C, it's starting to be as fast as Obj-C, and it interoperates with any Obj-C code (as Obj-C will interoperate with any Swift code). Apple has never messaged that it's for lightweight apps, and developers aren't treating it that way. I still prefer Obj-C, but I'm not sure what that bit is about at all.

about 2 months ago

Ask Slashdot: Is iOS 8 a Pig?

maccodemonkey iOS 8 compatible apps not related (504 comments)

The iOS 8 app upgrades are pretty much for things like being able to target new/any screen sizes. If you're on an existing device, that doesn't mean much. I don't think there is anything in the new SDK that would imply a performance decline in apps that adopt it.

The X.0.0 upgrades are pretty well known for including slower/unoptimized drivers and code paths. Apple is usually in a hurry to get the release out the door and they don't do all the optimizations they should. Usually by X.0.1 or X.1 they get things cleaned up. So it doesn't surprise me that 8.0 is a little pokey. 7.0 had basically the same issues.

about 2 months ago

Android Apps Now Unofficially Able To Run On Any Major Desktop OS

maccodemonkey Re:Please make this thing useful for development (101 comments)

Don't forget the "nearly every platform" comment from TFA. Apps aren't currently designed for use with a mouse, but it doesn't have to stay that way. The Android app format is coming close to being the fabled "universal binary", finally giving developers the long-promised write once, run anywhere ability.

Heh. The dream of the 90s is alive on Slashdot.

It wouldn't be the first. Java and HTML/JavaScript long beat Android to the punch. In fact, HTML/JavaScript does it better. OpenGL ES on Android isn't exactly platform neutral (my Mac doesn't have an ES driver for its Nvidia/Intel hardware, so the best it can do is software rendering, while WebGL is abstracted so it can render just fine.)

We can use the lessons from its forebears to tell why it won't be adopted in the marketplace as a universal app solution. Both Java and HTML/CSS make universal app deployment technically a reality. For the past 20-ish years I've been able to write a Java app and deploy it on any platform. HTML/CSS runs well on both desktop and mobile devices as well.

The usability problem you always run into is that by pretending all platforms are the same, the usability strengths of each platform are ignored. A mouse and pointer is a really basic example that both iOS and Android can handle, but what about security models? The Android security model, OS X security model, iOS security model, and Windows security model are entirely different. Apple platforms like to grant access capability by capability, at the time each capability is accessed. Android doesn't work like that at all; it wants everything up front. So an Android app trying to access my Address Book doesn't have any API to do so on my Mac.

Or what about contextual menus? I expect those on a Mac, but Android doesn't have them. Macs also draw differently. They expect scrollable content to flow under window sidebars and titlebars. Android doesn't expect that. You can't make an Android app act like it's running natively on a Mac without reflowing all the widgets in the window. And Android apps don't have multiple windows. I expect that on a Mac.

Mac applications also have toolbars (as do Windows applications), but Android doesn't even have an API for that. All Mac applications have a rearrangeable toolbar, but Windows doesn't. Mac and Windows computers can have multiple GPUs, which means that Android would need an API to handle a window having to shuffle from one GPU context to another, and I don't think it has that. There are also font layout issues. Mac and Windows have different default fonts, which could dramatically shift line spacing and what text fits where.

Mac at least also has contextual definitions when you right click on a word. Will Android apps have that? My Mac apps support QuickLook in the Finder, but there isn't anything like QuickLook under Android to abstract into. I also like searching with Spotlight, but Android apps don't have any Spotlight vendors. Do Android apps ask for my user name and password to do secured operations? Again, Android apps don't have any idea of on-demand security, and I really don't want to have to enter my admin username/password every time I launch an Android app. Same thing would apply to UAC.

If you hadn't stopped reading by now, you might be starting to get my point. The reason Java failed to take the desktop world by storm is that not all desktops are the same or even have the same capabilities. Yes, as you suggested, you can go down the road of adding a bunch of APIs to handle all these different scenarios. But then you're back to writing a bunch of code to support a bunch of different platforms. It's right back where you started. Java didn't end up saving time for multi-platform because the dream of writing once and running anywhere was unobtainable for desktop GUI applications, and it still is for the same reasons. It's technically possible, but the same user experience everywhere was unacceptable to users and unworkable. Even Microsoft wasn't crazy enough to believe they could get away with write once, run anywhere for Office. Office on the Mac runs/looks different than the PC version because when they tried to make the Mac version look like the Windows version Mac users revolted. I've had Windows Office users equally pitch a fit when they've been set up with the Mac version of Office.

HTML/JavaScript sidestepped this by defining very broad APIs and letting the platform make inferences as to how things should work. It was also shaped by the communal cooperation of all the different platform vendors to build something that could be molded to work on everything. Android's API does not leave much room for the native platform to reinterpret application logic at all, and it wasn't written by the platform vendors together.

It wouldn't surprise me if Google added this capability to Chrome. It would surprise me if there was any wide uptake.

about 2 months ago

Tim Cook Says Apple Can't Read Users' Emails, That iCloud Wasn't Hacked

maccodemonkey Re:Is this technically impossible - no. (191 comments)

This works because iMessages are stored on your device, and not the server. So when you change your password and update it on your devices, the iMessages will re-transmit their history to other devices. So no, not wrong.

If you pull all of your devices offline and reset them, and then take them back online, the history won't be available to sync so all your messages will be gone. Apple does manage delivery, but the initial handshake is done by a peer to peer key exchange, so while Apple is caching and flinging data, they don't sit in the middle of the key exchange, so they can't read messages.

Email is another matter. The nature of how email works means they probably have some sort of access.

All the complaints about how buggy iMessages is make sense when you look at all the mechanics that they go through to keep messages secure.

about 2 months ago

AT&T Says 10Mbps Is Too Fast For "Broadband," 4Mbps Is Enough

maccodemonkey Re:Ask anyone still on Dial Up (533 comments)

4mbps is 100 times faster than dialup, if not more because where I can usually get the full speed of my broadband connection, I almost never got the full speed of dialup, usually around 33kbps. What took a week to download on dialup takes 1 hour on 4 mbps.

You're right. It's faster than dial up. That STILL doesn't make it broadband. The definition of broadband is not 'It's faster than dial up."

If we're still calling the 100 mbps cable connection I have now broadband 30 years from now because it's faster than dial up... Well that's just going to be stupid.
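For what it's worth, the arithmetic in the quoted comment is about right, assuming a typical real-world 33.6 kbps dial-up rate:

```python
# Quoted claim: 4 Mbps is "100 times faster than dialup, if not more",
# and a week-long dialup download finishes in about an hour.
dialup_bps = 33.6e3                # typical real-world dial-up throughput
broadband_bps = 4e6
speedup = broadband_bps / dialup_bps
assert 100 < speedup < 125         # ~119x: "100 times faster, if not more"

week_hours = 7 * 24
assert round(week_hours / speedup, 1) == 1.4  # closer to 1.4 h than 1 h
```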

about 2 months ago

AT&T Says 10Mbps Is Too Fast For "Broadband," 4Mbps Is Enough

maccodemonkey Re:Ask anyone still on Dial Up (533 comments)

Give anyone a 4 mbps connection who is living in an area that still has dialup as their only option, and ask them if it's broadband. If someone works to bring 4/1 mbps connections to more areas, they should be able to advertise it as broadband.

That's like saying I should be able to advertise my bicycle as a car if I'm selling it in an area that is still using horses.

about 2 months ago

News Aggregator Fark Adds Misogyny Ban

maccodemonkey Re:Sigh (748 comments)

It's not even that. He can still hate people. He just can't ACT on it.

I'm not sure someone in that position could actually promise that, and I understand why the board would be uncomfortable with that.

His job entailed using his judgement to guide a company. Whatever he promises, his biases are part of that opinion. If the board doesn't like the sort of judgement he'd exercise in running the company, they're free to boot him. Especially if there was a risk that he might treat homosexual employees unfairly, which both opens the company to lawsuits, and could keep away good talent.

This reminds me of what the CEO of Urban Airship said: "Sure, I visit swingers clubs and sexually assaulted my girlfriend, but that's entirely separate from my work life." When you run a company, your personality and views are entirely relevant to your work life because they affect your judgement, which affects the company.

"My personal and work lives are entirely separate and I won't let me ideas from one affect the other" is totally a bogus excuse. You can't tell me that someone who does not like gay people at home is suddenly going to come to work, turn that entirely off, and then treat gay people like total equals and not discriminate in any fashion. Sorry, I don't buy it.

about 3 months ago

The Biggest iPhone Security Risk Could Be Connecting One To a Computer

maccodemonkey Re:Minor detail glossed over in the headline (72 comments)

No. The phone should display a notification if an application is side loaded over USB. It shouldn't be possible to install an application without the user's knowledge. Trusting the connection should merely allow the phone and the computer to communicate. It should not allow remote control of the device.

It DOES display a notification when a computer attempts to establish a link, along with requiring user confirmation.

about 3 months ago

Google, Linaro Develop Custom Android Edition For Project Ara

maccodemonkey Re:im happy google took this on (46 comments)

Just think different a little bit. Integrate the secure enclave into the button/sensor module.

I don't think that would work.

I'm pretty sure the secure enclave has authorization hooks to the hardware decryption on the CPU. Even if you moved the hardware encryption/decryption to the thumbprint reader, this brings up another problem with Ara... If you change the CPU or your hardware encryption module, do you lose your data if it was encrypted with the old key?

about 4 months ago

Is the App Store Broken?

maccodemonkey Re:People expecting their marketing for free (258 comments)

Too many people want to get rich by selling apps and expect Apple to pay for the marketing of their apps for free on the App Store.

I don't think this is quite what people are expecting. Rather, the problem is Apple directly prohibits most ways that an app can be promoted. Want to do a demo? No great way to do it in the app store. A trial? Forbidden. Want to offer a download directly from the developer? Nope.

So really what developers are requesting is simple: If Apple wants to directly hand hold the distribution and retail channel of an application, they either need to improve visibility for applications within that retail channel, or give developers more flexibility in how they can market applications. Apple isn't entirely responsible, but because they want developers to be so reliant on their store front, the argument is that Apple needs to actually provide a good store front to make that trade off worth it.

It would be like if you struck a deal with Target where they had full control over how your product was sold and exclusive rights to sell it, and then they stuck it in a dark corner of their store and never sold a single unit.

about 4 months ago

Is the App Store Broken?

maccodemonkey Re:Too many apps, too much appcrap (258 comments)

Question for you, as someone who has developed a mobile app:

How much harder is it to optimize a mobile version of the webpage vs writing an app from scratch and getting it approved for App Store release?

Mobile developer here who has done hybrid apps, Android apps, iOS apps, web apps, etc.

It's hard.

Web apps do not get the native scrolling mechanism, so scrolling feels very funky in web apps. Web app developers write their own inertial scrolling mechanisms to try to deal with it, but web apps always feel wrong as a result.

You also don't get access to a lot of native functions. No barcode scanning. No access to the user's preloaded Facebook account (with authorization, of course.)

There is another problem in that, especially on Android, web technologies are just badly supported. It's getting better in more recent versions of Android where Chrome is actually the engine used end to end by everyone, but earlier versions still on Google's old ass version of WebKit blew chunks.

Loading can be a problem as well. Real apps by definition cache a certain amount of code and resources on the device. A web page has to fetch all resources from start to finish. So while a real app has its loading UI cached on device, and can display it right away when the user taps a link, a web page has to go fetch a UI over the network to display a loading UI for the operation the web app is about to do over the network. Gross.

The other really messy thing is a real app is pretty easily able to figure out what kind of device it's on and render content accordingly. Web apps can kind of guess what type of display/device they are running on, but again, it can be messy. Especially with new things like Adaptive UI/multi-windowing coming on iOS, where your window or screen size may have no real connection to what kind of device you're running on. Web pages at this point basically assume they're always rendering full screen on mobile, and do their layout computations based on that, but that looks like it will change on future iOS and Android devices.

You also have a problem with native widgets. If I code a real iOS app, if I run it on iOS 6, it looks like iOS 6. If I run it on iOS 7, it looks like iOS 7. I don't have to create new assets, the app automatically ingests the correct look from the widget set built into the OS. With a web page, I get the "joy" of building my widget set from scratch, and trying to make it at least resemble the system UI widgets the user has been trained to use. And better yet, if I make my web app look like an iOS app, I suddenly have a bunch of Android users unhappy my web app looks like an Android app.

Finally, web apps don't offer any way to be embedded as extensions on iOS, or activities on Android. You can kind of fake it with some really really ugly URL handling handshaking, but this is really problem prone.

TL;DR: Mobile web frameworks/browsers are still immature, and don't offer basic mobile-specific functionality that's needed to do apps well. It's not a problem of it being hard to do a web app just as good as a native app, it's a problem of it being impossible because the feature sets just aren't there.

about 4 months ago

Ask Slashdot: Where Can I Find Resources On Programming For Palm OS 5?

maccodemonkey Re:Don't repeat yourself in a multilingual project (170 comments)

Online games won't play unless at the latest patch level, for example.

Because the user is using the application during a 2-hour period of having no access to the Internet.

These are mutually exclusive. Online games stop you from doing client side things because they're online. An offline application can't know that validation has changed or there is an app update because it's offline. At that point, what do you do, toss out any data the user entered while they were offline?

One easy fix (again): Do your validation on the server end only. Save the data locally, and when the user goes to submit it and it fails, then you throw an error. User doesn't lose any data, and your validation will always be good.
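A minimal sketch of that workflow, with illustrative names (none of this is a real framework API): save locally, validate only on the server, and keep rejected records around so the user never loses data:

```python
pending = []                       # local store; survives offline periods

def save_locally(record):
    """Accept anything while offline; validation happens later."""
    pending.append(record)

def server_validate(record):
    """Single source of truth for validation rules (server side)."""
    errors = []
    if not record.get("email", "").count("@"):
        errors.append("email: invalid address")
    return errors

def sync():
    """Submit queued records; keep the ones the server rejects."""
    results, still_pending = [], []
    for record in pending:
        errors = server_validate(record)
        if errors:
            still_pending.append(record)   # user data is never thrown away
        results.append((record, errors))
    pending[:] = still_pending
    return results

save_locally({"email": "good@example.com"})
save_locally({"email": "no-at-sign"})
results = sync()
assert len(pending) == 1           # the invalid record is kept, not lost
assert results[1][1] == ["email: invalid address"]
```

Because the rules live in one place, fixing them means fixing the server; stale offline clients just get accurate errors on their next sync instead of silently disagreeing.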

So your suggested workflow is just to let the user enter grossly invalid data for two hours then have the server present pages of error messages once a connection is reestablished.

As I noted above, there isn't really a way around this. Even if I follow your approach, when the client and server versions mismatch because the user was offline they'll get the same pages of errors. An offline user can't get a client update to fix the client side validation because again (drumroll) they're offline.

And, when they come back online, and they get the automatic update, they now have a local user database chock full of invalid data according to local validation. Do you just toss out all that data now because it no longer meets local validation? Or are you intentionally going to punch holes through local validation to grandfather now broken data in?

Boy, I hope your QA team has a large alcohol budget and the world's largest whiteboard for their validation testing matrix.

Hence the growth of Node.

It's true Node is growing, but again, data validation is usually either trivial enough it can be done on the client end in any language, or complicated enough you probably don't want to be doing it on the client end any way.

about 4 months ago

Ask Slashdot: Where Can I Find Resources On Programming For Palm OS 5?

maccodemonkey Re:Don't repeat yourself in a multilingual project (170 comments)

I don't usually see server architectures and client architectures sharing too much in the way of logic code

Input validation logic and any logic related to offline use needs to be the same (or at least provably identically behaving) on server and client.

I don't buy that's a reasonable excuse to force the client and server to be the same language.

First off, I don't buy that a client necessarily needs to do validation at all if the server is doing it. In fact, if you're doing complex validation on the client end, I think that is a Bad Thing (TM). What if your validation is wrong? Well you could just fix your server. But now your client's validation doesn't match, unless you're going to go around and force all your clients to update. Maybe at gunpoint or something. Who knows. But regardless, your client is going to think input is valid, and your server won't. Have you handled that case? What does that UI response look like? Have you unit tested it? Were you silly enough to think if it passed validation on the client end, it MUST pass on the server end? Cause if you did, you're screwed.

So I guess my simplest answer would be, if you need to do complicated validation why the heck are you doing it on the client? Just send it to the server, and then let the server return an error. That way you can fix your validation quickly server side if anything goes wrong, and you don't end up in test case hell in case the server and client disagree. You can also update your validation without touching your client code. And it really reduces your test cases and simplifies your unit testing flow.

For very simple validation (e.g. a credit card is always X number of digits, or a user needs to fill in these fields before they can press the submit button), I could see doing client side work. But that validation is so simple it's not hard to re-code. It's also usually so tied to the UI layer, you're going to be writing a lot of platform specific code any way.
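The sort of check that's simple enough to re-code per platform might look like this (illustrative only; the field names are made up):

```python
def can_submit(form):
    """Gate the submit button: required fields plus a sane card length."""
    required = ("name", "card_number")
    if any(not form.get(field) for field in required):
        return False
    digits = form["card_number"].replace(" ", "")
    # Real card numbers run 13-19 digits; deeper checks belong server-side.
    return digits.isdigit() and 13 <= len(digits) <= 19

assert not can_submit({"name": "Ada"})                       # missing field
assert not can_submit({"name": "Ada", "card_number": "12"})  # too short
assert can_submit({"name": "Ada", "card_number": "4111 1111 1111 1111"})
```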

I also still don't buy that being able to share code like that is worth the cost of locking entire ecosystems to a language and stifling language development in favor of a monoculture.

Again, if this is the metric we're working on, I could just take it up one level and say everyone should learn JavaScript instead of Java (and everyone should stop using Java) because you can't run Java in a web browser... Well... I take it back. Maybe it's an argument for the return of Java applets instead. :)

about 4 months ago

Ask Slashdot: Where Can I Find Resources On Programming For Palm OS 5?

maccodemonkey Re:Don't repeat yourself in a multilingual project (170 comments)

An application can be separated into logic and presentation, or model and view, however your framework prefers to describe them. A program may require separate presentation for each platform, but versions of a program for multiple platforms should ideally share the logic. But some platforms strongly recommend or even require use of certain languages. How can a programmer follow the rule of not repeating yourself to share logic across languages? Say I developed a game in Java or Objective-C but I want to port it to a Microsoft platform that allows only C#. (In theory it allows any language that compiles to verifiably type-safe .NET Compact Framework bytecode, but in practice that means C#.) How would I go about making and maintaining that port so that fixes to defects in the logic of the version on the original platform propagate to the version on the Microsoft platform ?

I don't think writing logic should be a gating factor that keeps a developer from using the right tool for the job, or keep a developer or a community locked in single language programming hell. There are edge cases (I've worked on an Android/iOS app that kept a bunch of code in JavaScript because it runs on both), but this doesn't even make the answer automatically "Java". I could very well say that developer should just go learn JavaScript because it runs on everything.

But more to the point, I don't usually see server architectures and client architectures sharing too much in the way of logic code, and the code they share typically isn't that complex, and doesn't usually require much work to port from one language to the next.

about 4 months ago

Ask Slashdot: Where Can I Find Resources On Programming For Palm OS 5?

maccodemonkey Re: Not worth it (170 comments)

Android Java knowledge is reusable for... Server side development?

The biggest time suck for learning a new platform is the platform itself, not the language. If we're comparing platforms, Android is like programming on the moon, and server side development is like programming on Saturn. A new programming language should only take a week or two to learn. The platform takes years. Android doesn't have much in common with a web platform. Unless Tomcat got an API to do mobile UI and touch handling, and Android got an API for failover and distributed services, they don't really have much in common at all.

If a developer is scared to cross to any platform because they don't want to be multi-lingual, they're doing it wrong. Java, Obj-C, Swift and C# are all pretty much the same thing, just with some syntax changes. Heck, there is even a family tree there. Java was based on Obj-C, and C# was based on Java. Swift is based on all of them.

about 4 months ago


