
Glyphy: High Quality Glyph Rendering Using OpenGL ES2 Shaders

Unknown Lamer posted about 9 months ago | from the doing-it-the-right-way dept.


Recently presented at linux.conf.au was Glyphy, a text renderer implemented using OpenGL ES2 shaders. Current OpenGL applications rasterize text on the CPU using FreeType or a similar library, uploading glyphs to the GPU as textures. This inherently limits quality and flexibility (e.g. rotation, perspective transforms, etc. cause the font hinting to become incorrect, and you cannot perform subpixel antialiasing). Glyphy, on the other hand, uploads typeface vectors to the GPU and renders text in real time, performing perspective-correct antialiasing. The presentation can be watched or downloaded on Vimeo. The slide sources are in Python, and I generated a PDF of the slides (warning: 15M due to embedded images). Source code is at Google Code (including a demo application), under the Apache License.


Openhardware video card (1)

r.freeman (2944629) | about 9 months ago | (#45968415)

Now that is something I would like to have. And moving to the rendering described here would probably remove that option.

Re:Openhardware video card (1)

Anonymous Coward | about 9 months ago | (#45968497)

and you wrote this on your open hardware CPU and Mobo looking at your open hardware monitor?

Re:Openhardware video card (1)

Jmc23 (2353706) | about 9 months ago | (#45968635)

Why? Do you just randomly jump to conclusions so you can feel bad about your life?

Re:Openhardware video card (2)

tepples (727027) | about 9 months ago | (#45968945)

So long as a video card supports pixel shaders that modify the alpha channel, it can support distance field text rendering.

Ok, that pdf.. (3, Funny)

Janek Kozicki (722688) | about 9 months ago | (#45968451)

is the weirdest presentation that I ever saw on slashdot.

Re:Ok, that pdf.. (1)

Joce640k (829181) | about 9 months ago | (#45968575)

And with some of the worst use of fonts....

Re:Ok, that pdf.. (0)

Anonymous Coward | about 9 months ago | (#45968717)

Would it kill the guy to have a freaking screenshot? It's not like, oh, it's a graphical program or anything.

Video of SDF rendering (4, Informative)

tepples (727027) | about 9 months ago | (#45968921)

One of the techniques described in the video is signed distance field (SDF) rendering, where the alpha channel is blurred to indicate distance from the ideal contour. Here's a video of it in action [youtube.com] . It won't help if you're on dial-up or EDGE, but you should be able to get the idea if you're on any sort of broadband.

Re:Video of SDF rendering (2)

spitzak (4019) | about 9 months ago | (#45970623)

That video [youtube.com] shows the first type of signed distance implementation, while Glyphy is a new type of storage.

He briefly shows a texture for this older version at the start of his video. In that version the distance from the center of the pixel to the nearest edge is stored in each pixel (one number per pixel). This has been done for years, btw, and is not new.

In Glyphy the actual definition of the nearest circular arc is stored in each pixel (either 3 or 5 numbers per pixel, depending on whether a circle or an arc segment is used; that is unclear from the presentation). It also stores more than one circular arc, since the closest one differs for each point within the pixel (how that is done is also pretty unclear). So the texture is bigger (or maybe not; perhaps it could be lower resolution for equal quality). But it gets rid of a lot of artifacts seen in that video; in particular, sharp corners are preserved.
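To make that concrete, here is a rough Python sketch (my own illustration, not Glyphy's actual code) of the unsigned distance from a point to a circular arc, the kind of per-fragment evaluation such a shader would have to perform:

```python
import math

def dist_to_arc(p, center, radius, a0, a1):
    """Unsigned distance from point p to the circular arc with the given
    center/radius spanning angles a0..a1 (radians, assuming a0 in [-pi, pi]
    and a0 < a1 <= a0 + 2*pi)."""
    px, py = p[0] - center[0], p[1] - center[1]
    ang = math.atan2(py, px)
    if ang < a0:                     # wrap the query angle past a0
        ang += 2 * math.pi
    if ang <= a1:
        # Closest point lies on the arc itself.
        return abs(math.hypot(px, py) - radius)
    # Otherwise the closest point is one of the arc's endpoints.
    e0 = (center[0] + radius * math.cos(a0), center[1] + radius * math.sin(a0))
    e1 = (center[0] + radius * math.cos(a1), center[1] + radius * math.sin(a1))
    return min(math.hypot(p[0] - e0[0], p[1] - e0[1]),
               math.hypot(p[0] - e1[0], p[1] - e1[1]))
```

A real shader would take the minimum of this over the handful of arcs stored for the pixel's cell.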

A problem is that his shader is too complex and hits bugs in all the current GLSL compilers.

Re:Ok, that pdf.. (1)

behdad (320171) | about 9 months ago | (#45973121)

What do you mean? The presentation is pretty much *all* screenshots!

Re:Ok, that pdf.. (1)

behdad (320171) | about 9 months ago | (#45973113)

I take it as a compliment ;).

WebGL (1)

ComfortablyAmbiguous (1740854) | about 9 months ago | (#45968471)

I would love to know if this can be made to work with WebGL. There are so many possibilities in web applications for really nice font management.

Re:WebGL (1)

UnknownSoldier (67820) | about 9 months ago | (#45969473)

> can be made to work with WebGL ?

WebGL is essentially OpenGL ES 2.0 :-)

And what fallback? (1)

tepples (727027) | about 9 months ago | (#45970233)

There are so many possibilities in web applications for really nice font management.

Which are all wasted if the end user's browser lacks WebGL support entirely, as is the case with all web browsers for iPhone or iPad, or if the end user's browser detects insufficiency in the underlying OpenGL implementation, as my browser does (Firefox 26.0 on Xubuntu 12.04 LTS on Atom N450). All I get is "Hmm. While your browser seems to support WebGL, it is disabled or unavailable. If possible, please ensure that you are running the latest drivers for your video card", even after doing sudo sh -c "apt-get update; apt-get upgrade" this morning. In about:support I get "Driver Version: 1.4 Mesa 8.0.4" and "WebGL Renderer: Blocked for your graphics card because of unresolved driver issues."

So what fallback would you choose to use when WebGL is unavailable?

Re:And what fallback? (1)

binarylarry (1338699) | about 9 months ago | (#45970625)

This.

I tried running some new fangled webgl demo on an old chevy pickup truck I have in the backyard. It didn't work there either, it just kind of sputtered and emitted white smoke.

Re:And what fallback? (1)

ComfortablyAmbiguous (1740854) | about 9 months ago | (#45975647)

Well, I often end up doing corporate in-house sites where the capabilities of every machine that will access the site are known. In those cases the fallback becomes much less important. I agree with your point, however, for public sites.

Re:WebGL (1)

behdad (320171) | about 9 months ago | (#45973127)

As mentioned in the Q&A section, at some point I had GLyphy compiled through Emscripten to JavaScript+WebGL. It worked rather well. I should try that again.

Re:WebGL (1)

ComfortablyAmbiguous (1740854) | about 9 months ago | (#45975631)

Thanks, I'll take a look

This is great! (1)

PhrostyMcByte (589271) | about 9 months ago | (#45968547)

I've often thought the great potential of Microsoft's DirectWrite was wasted on Direct3D. Having an Open replacement provides so many more opportunities.

damn subpixel antialiasing (1)

Anonymous Coward | about 9 months ago | (#45968599)

Whoever came up with blurry-color subpixel font rendering should be shot. I understand the theory, but it's an optical illusion that is incompatible with my eyeballs. Worse, subpixel rendering is the default in all kinds of places. My eyes hurt just thinking about it. Please oh please do not let this (otherwise very cool idea) make the problem even worse.

Re:damn subpixel antialiasing (1)

Anonymous Coward | about 9 months ago | (#45968755)

Your monitor has unusual pixel ordering or you're insane. It's not an optical illusion at all, it's exactly what its name implies: it uses subpixels to smooth out the edges of a font. That's less of an optical illusion than the color being displayed on your monitor is.

Re:damn subpixel antialiasing (1)

Carewolf (581105) | about 9 months ago | (#45969105)

Or his monitor has very sharp pixels. I had to disable subpixel rendering recently because my new monitor's pixels were too sharp. It turned out I could adjust the "sharpness" setting of the screen, and if I turned it down subpixel rendering worked again; at average sharpness, subpixel rendering just caused colored fringes on letters, which is exactly how it is rendered.

Re:damn subpixel antialiasing (1)

Anonymous Coward | about 9 months ago | (#45969613)

Fair enough. That's a monitor image processing artifact though, not an issue with subpixel rendering itself. Sharpness at 50% with an HDMI/DVI/DisplayPort connection will cause any normal monitor not to apply any sharpening/blurring.

Re:damn subpixel antialiasing (4, Insightful)

AC-x (735297) | about 9 months ago | (#45969801)

You know what really annoys me? How almost all 1080p displays these days seem to, by default, take the hdmi video input, slightly up-scale it (to overscan) and sharpen the hell out of it.

What the fuck?? It's a digital signal, they're taking the literally pixel perfect input and ruining it by smearing individual input pixels over several output pixels and putting sharpening artefacts everywhere. Why? When is that ever a good idea?? Why would you ever need to overscan HDMI?

Early-adopter CRT HDTVs (1)

tepples (727027) | about 9 months ago | (#45970273)

Why would you ever need to overscan HDMI?

Because television video is authored with early-adopter CRT HDTVs (and thus with overscan) in mind.

Re:Early-adopter CRT HDTVs (1)

AC-x (735297) | about 9 months ago | (#45971093)

but it's not like there's a black border, why would you not want to view the edges?

Re:Early-adopter CRT HDTVs (1)

tepples (727027) | about 9 months ago | (#45977355)

but it's not like there's a black border, why would you not want to view the edges?

I guess it must throw the composition out of balance, especially for things like news tickers at the bottom and sports scores at the top. And older film and video might still have things like a boom mic just out of the action safe area (but protruding slightly into the overscan).

monitor or TV? (1)

Chirs (87576) | about 9 months ago | (#45971071)

If you're on a monitor then it should not be messing with the signal at all.

If you're on a TV, then it's expecting consumer-grade TV signals and will futz with it. On some better TVs there is a way to tell it that it's a computer signal and then it will skip the mangling and just show it as-is.

Re:damn subpixel antialiasing (1, Insightful)

Goaway (82658) | about 9 months ago | (#45968771)

Maybe your OS is just using the wrong subpixel rendering for your display type.

Re:damn subpixel antialiasing (0)

Anonymous Coward | about 9 months ago | (#45971943)

Or maybe he's using a VGA or DVI-I instead of DVI-D or HDMI. I recently sat down at someone's computer and thought my vision had suddenly gone bad; it turns out he was using a VGA cable on a 1680x1050 monitor.

(obligatory Robin Williams/Jumanji meme: "What year is it?" Maybe OP is still using a monitor from 1995)

Re:damn subpixel antialiasing (2, Funny)

Russ1642 (1087959) | about 9 months ago | (#45968837)

Might I suggest using an oxygen free, mono directional, ultra gold-plated HDMI cable to connect your monitor. It should fix the anti-aliasing flaw that you can somehow detect with your superhuman eyeballs.

Surprising (1)

CanHasDIY (1672858) | about 9 months ago | (#45968609)

Current OpenGL applications rasterize text on the CPU using Freetype or a similar library, uploading glyphs to the GPU as textures. This inherently limits quality and flexibility (e.g. rotation, perspective transforms, etc. cause the font hinting to become incorrect and you cannot perform subpixel antialiasing).

Wow, I never realized rendering text was such a royal pain in the ass.

Re:Surprising (4, Interesting)

PhrostyMcByte (589271) | about 9 months ago | (#45968757)

Although rendering text correctly is maddeningly complex, the issues described here aren't actually among them.

The things described here are more a result of the good, established libraries only being written for the CPU. Not because the GPU is harder to target, but simply because nobody had taken the time to do it.

Re:Surprising (2)

Carewolf (581105) | about 9 months ago | (#45969153)

Even if you rendered the glyphs on the GPU, you would still cache the rendered glyphs, because rendering that many small details on screen can be quite demanding on the GPU, whereas rendering lots of textures is what GPUs do all the time and is very well optimized.

Re:Surprising (2)

PhrostyMcByte (589271) | about 9 months ago | (#45970373)

Likely very true for simple static text. A number of games do more complex things though, such as 3D huds that shift with movement. This should be able to render them in realtime without sacrificing quality, which is pretty cool.

Re:Surprising (2)

edxwelch (600979) | about 9 months ago | (#45968903)

Most of the time you just display text with no transforms, and when you do want it transformed you don't need it pixel-perfect (for example, during a rotation transition effect, the user will hardly notice pixel imperfections while the text is rotating).

Re:Surprising (1)

behdad (320171) | about 9 months ago | (#45973141)

Font size *is* a transform. A scale, to be exact. One of the benefits of GLyphy is that you don't need to rasterize the font at every scale. Imagine pinch-zoom, for example.

Thanks for the warning about the 15M PDF file ... (1)

Hohlraum (135212) | about 9 months ago | (#45968679)

downloading that sucker could have taken down the entire Internet. ;) :D

Vector UI? (0)

Anonymous Coward | about 9 months ago | (#45968839)

Is a goal of vector rendering, beyond gaming, to eventually have the UI completely vector-based? Be it desktop or mobile device?

I just know that with systems in the past, out of curiosity, dragging a windowed program around, text-based or otherwise, would cause a CPU spike and plateau until I stopped. This was on both Windows and Linux. I'm just wondering if Glyphy presumably alleviates a possible roadblock to a completely vector-based UI, away from the CPU. It's pretty clear in the presentation that CPU crunching is avoided in the text manipulation he does. It always amazed me how much graphical elements would seem to spike a CPU, on any OS. It would seem unintuitive to think the GPU was the problem in those cases, but you might be hard-pressed with today's hardware offerings to see those particular problems persist.

Re:Vector UI? (1)

Russ1642 (1087959) | about 9 months ago | (#45968895)

Wouldn't it be damn easy to generate intermediate bitmaps from the vectors and then use those when dragging and moving windows? You don't need to redo every single calculation every time you re-render something. You only need to do a few more to make sure nothing has changed. Even a modest amount of performance optimization would make 100% vector rendering as fast as the crap we have now.

Re:Vector UI? (0)

Anonymous Coward | about 9 months ago | (#45982841)

That's a great idea if you always snap everything to an integer coordinate grid. This limits panning rates to integer multiples, and it also limits kerning and offsetting lines of text to the integer grid. Everything must be snapped, and it will look like shit. The reason is that the AA is pre-computed for each glyph at fixed subpixel coordinates. Rendering like this is antique and is NOT deployed in any modern web browser, for example. A nice idea from a performance POV, but quality-wise it's a dead end.

Subpixel and anaglyphs; distance fields (5, Informative)

tepples (727027) | about 9 months ago | (#45968885)

Subpixel text rendering is just antialiasing with the red channel offset by a third of a pixel in one direction and the blue channel by a third of a pixel in the other direction. I'd compare it to anaglyph rendering, which offsets the camera position in the red channel by one interpupillary distance from the green and blue channels so that 3D glasses can reconstruct it. If the rest of your system performs correct antialiasing of edges (FSAA, MSAA, etc.), the video card will do the subpixel AA for you.

The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts. This makes a slight change to the rasterization and blitting steps to store more edge information in each texel. First the alpha channel is blurred along the edges of glyphs so that it becomes a ramp instead of a sharp transition, and the glyphs are uploaded as a texture. The alpha forms a height map where 128 is the center: less than 128 is outside the glyph by that distance, and more than 128 is inside the glyph by that distance. This makes alpha into a plane at any point on the contour. The video card's linear interpolation unit interpolates along the blurred alpha, which is ideal because interpolation of a plane is exact. Finally, a pixel shader uses the smoothstep function to saturate the alpha such that the transition becomes one pixel wide. This allows high-quality scaling of bitmap fonts even with textures stored at 32px or smaller. It also allows programmatically making bold or light faces by setting the transition band closer to 96 or 160 or whatever. But it comes at the expense of slightly distorting the corners of stems, so it's probably best for sans-serif fonts.
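The reconstruction step can be sketched in a few lines. This is my own toy illustration; the band width and thresholds are arbitrary choices, not values from the PDF:

```python
def smoothstep(e0, e1, x):
    """GLSL-style smoothstep: clamp to [0,1], then cubic Hermite."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3 - 2 * t)

def sdf_coverage(texel, threshold=128, band=16):
    """Map an 8-bit signed-distance texel (128 = on the contour) to opacity.
    `band` sets the width of the antialiased transition; sliding `threshold`
    below or above 128 fakes a bolder or lighter face."""
    return smoothstep(threshold - band / 2, threshold + band / 2, texel)
```

On the contour, sdf_coverage(128) gives 0.5; lowering the threshold to 96 makes the same texel read as fully inside, i.e. a bolder face.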

The PDF also mentions approximating the outline as piecewise arcs of a circle, parabola, etc. and drawing each arc with an arc texture. This would be especially handy for TrueType glyph outlines, which are made of "quadratic Bezier splines", a fancy term for parabolic arcs.
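That "parabolic arc" claim is easy to check numerically. A quick sketch (my own, not from the slides) evaluating a quadratic Bezier both by the Bernstein polynomial and by de Casteljau's construction, which agree at every t:

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at t via the Bernstein polynomial."""
    u = 1 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def de_casteljau(p0, p1, p2, t):
    """Same curve by repeated linear interpolation (de Casteljau)."""
    lerp = lambda a, b: tuple((1 - t) * x + t * y for x, y in zip(a, b))
    return lerp(lerp(p0, p1), lerp(p1, p2))
```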

Re:Subpixel and anaglyphs; distance fields (2)

UnknownSoldier (67820) | about 9 months ago | (#45969635)

> The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts.

Correct; Valve published this technique in 2007.

http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf [valvesoftware.com]

Re:Subpixel and anaglyphs; distance fields (1)

spitzak (4019) | about 9 months ago | (#45970697)

Yes, this is an improvement on signed distance fields. If I understand it right, it is not the distance to the nearest point but a definition of the nearest circular arc that is stored in each texture pixel. This seems to preserve corners and thin stems. Though it already sounds complex, he in fact has to store more than one arc per pixel (as the closest one varies depending on the position), and it looks like it has to define actual arcs, not circles, which I would imagine complicates the shader greatly.

Re:Subpixel and anaglyphs; distance fields (1)

spitzak (4019) | about 9 months ago | (#45970785)

Subpixel text rendering also needs a filtering step so that the color does not shift (imagine if the shape was such that more of it was in the red area than in the blue area). What happens is that the red is made somewhat less than it should be, and the difference is added to the four nearest green and blue subpixels, so the overall light is white, just concentrated at the red subpixel.
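That redistribution can be sketched as a small FIR filter over the subpixel coverage stream. This is my own toy version; real implementations, e.g. FreeType's LCD filtering, use tuned weights:

```python
def lcd_filter(coverage, weights=(1, 2, 3, 2, 1)):
    """Spread each subpixel's coverage over neighbouring subpixels with a
    5-tap filter so total light is preserved while no single subpixel
    carries all of it (reducing colour fringing).  `coverage` is a flat
    list of per-subpixel values in display order (R,G,B,R,G,B,...)."""
    s = sum(weights)
    out = []
    for i in range(len(coverage)):
        acc = 0
        for k, w in enumerate(weights):
            j = i + k - 2            # tap offsets -2..+2
            if 0 <= j < len(coverage):
                acc += w * coverage[j]
        out.append(acc / s)
    return out
```

A single lit subpixel gets smeared over its four neighbours, but the total (and hence the perceived brightness) is unchanged.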

However your description is basically correct. The video said he needed to add a "direction" to make subpixel filtering work, which I don't understand, as he should have been able to generate it from the same data he used to make black and white images. Possibly it is a shortcut to get the filtering at the same time. Or he may be working around Microsoft's patenting of the filtering step.

Re:Subpixel and anaglyphs; distance fields (1)

tepples (727027) | about 9 months ago | (#45979055)

Properly done AA on the edges of glyphs already performs this filtering to an extent.

Utter dribble (4, Interesting)

Anonymous Coward | about 9 months ago | (#45969813)

There is NOTHING that the GPU can do that software rendering on the CPU cannot do. There MAY be a speed penalty, of course (and were you using the CPU to render a 3D game rather than your GPU, the speed penalty would be on the order of thousands to tens of thousands of times slower).

The reverse is NOT true. There are rendering methods available on the CPU that the GPU cannot implement, because of hardware limitations. Take COVERAGE-BASED anti-aliasing, for instance.

On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling, which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 in each direction (so a pixel becomes, in some sense, 2x2 pixels).
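For a single straight edge, the exact-coverage computation really is short. A rough Python sketch (my own illustration, not production code) that clips the unit pixel against a half-plane and measures the remaining area:

```python
def halfplane_coverage(a, b, c):
    """Exact fraction of the unit pixel [0,1]^2 lying on the side of the
    line a*x + b*y + c <= 0, computed by Sutherland-Hodgman clipping of
    the square followed by the shoelace area formula."""
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    f = lambda p: a * p[0] + b * p[1] + c
    poly = []
    for i, p in enumerate(square):
        q = square[(i + 1) % 4]
        fp, fq = f(p), f(q)
        if fp <= 0:
            poly.append(p)           # vertex is inside the half-plane
        if (fp <= 0) != (fq <= 0):   # edge crosses the line
            t = fp / (fp - fq)
            poly.append((p[0] + t * (q[0] - p[0]),
                         p[1] + t * (q[1] - p[1])))
    area = 0.0                       # shoelace over the clipped polygon
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        area += p[0] * q[1] - q[0] * p[1]
    return abs(area) / 2
```

Curved contours and overlapping shapes are what make the general case hard, not this per-edge arithmetic.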

Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.

A 'GLYPH' tends to have a highly curved contour, which sub-divides into a nasty mess of GPU unfriendly irregular tiny triangles. GPUs are most efficient when they render a stream of similar triangles of at least 16 visible screen pixels. Glyphs can be nasty, highly 'concave' entities with MANY failure conditions for fill algorithms. They are exactly the kind of geometric entities modern GPU hardware hates the most.

It gets WORSE, much much worse. Modern GPU hardware drivers from AMD and Nvidia purposely cripple common 2D acceleration functions in DirectX and OpenGL so they can sell so-called 'professional' hardware (with the exact same chips) to CAD users and the like. The situation got so bad with AMD solutions that tech sites could not believe how slowly modern AMD GPU cards rendered the new accelerated Windows 2D interface, forcing AMD to release new drivers that backed off on the choking just a little.

Admittedly, so long as glyph rendering is accelerated using the 3D pipeline and ordinary shader solutions, the crippling will not happen; but the crippling WILL be back if non-gaming forms of anti-aliasing are activated in hardware on Nvidia or AMD cards. Nvidia, for instance, boasts that drawing anti-aliased lines with its $2000-plus professional card is hundreds of times faster than doing the same with the gaming version of the card, which uses the EXACT same hardware and actually has faster memory and GPU clocks.

It gets WORSE. Rendering text on a modern CPU, and displaying the text using the 2D pipeline of a modern GPU is very power efficient. However, activate the 3D gaming mode of your GPU, by accelerating glyphs through the 3D pipeline, and power usage goes through the roof.

Re:Utter dribble (1)

Anonymous Coward | about 9 months ago | (#45970459)

The speaker (who is admittedly hard to understand because he apparently has marbles in his mouth) explains that he can't do things that have trivial implementations in OpenGL 3.x because he's intentionally limiting himself to OpenGL ES2.

tl;dr: this guy is doing incremental research on font-rendering with signed distance fields(*) while intentionally holding one hand behind his back.

* = See UnknownSoldier's link to the 2007 paper.

Re:Utter dribble (0)

Anonymous Coward | about 9 months ago | (#45986447)

It's not incremental research on signed distance fields, he's doing curve rasterization on the GPU. Do keep up.

Of course there was a paper on exactly this topic from Microsoft at SIGGRAPH in the mid-00s so it's not anything new, but it's fun to watch.

Re:Utter dribble (1)

Anonymous Coward | about 9 months ago | (#45971473)

Clean the froth from your mouth and I can tell you the very simple answer. Subpixel rendering is not implemented on current 3D hardware because anything can be rendered at any depth in any order, so if you render a partial color to a pixel, what is it supposed to blend with? 2D scenes are easily sorted, while sorting a 3D scene can be almost impossible. However, it MAY be possible to sort a 3D scene in many situations, and it would be nice to put the hardware in a mode where it can assume it can just blend a partial pixel with the current value (assuming it could do that). Unfortunately I think this is just a case of a very old mindset that keeps most 3D rendering looking good at the expense of extra features (and I can't blame them, early 3D hardware was already confusing enough). I'm sure it will change eventually, because I personally feel MSAA is a crude solution.

Re:Utter dribble (1)

deaf.seven (2669973) | about 9 months ago | (#45973759)

On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling- which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 (so a pixel becomes in some sense 2x2 pixels).

Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.

You can do that with a fragment shader. It's most likely not going to be efficient, but it is certainly possible.

But that's one of the biggest benefits of the GPU. It's not just the huge FLOPS number; it's that it gets people to think about implementing solutions in an efficient way.

GPUs are best at what are known as 'embarrassingly parallel' problems, meaning it's so easy to implement the problem in parallel that it's almost embarrassing. These days these problems are also known as 'perfectly parallel'. So the GPU forces people to come up with, and think about, solutions that fit this model.

Previous WebGL work-around ... (2)

UnknownSoldier (67820) | about 9 months ago | (#45969959)

I ran into font edges with fringes and halos 2 years back when trying to render an 8-bit luminance font with an arbitrary user specified color. (Blue was the worst offender for fringes.)

I wasn't aware of Valve's clever SDF solution at the time, so I used a different three-fold solution:

* Generate the texture font atlas offline using custom code + FreeType2
Each font is "natively" exported at various sizes from 8 px up to 72 px.

* Use pre-multiplied alpha blending for rendering instead of the standard alpha blending:

    gl.enable( gl.BLEND );
    gl.blendFunc( gl.ONE, gl.ONE_MINUS_SRC_ALPHA );

* Fix the fragment shader to use pre-multiplied alpha:

    uniform lowp vec4 uvColor;
    uniform sampler2D utDiffuse;
    varying mediump vec3 vvTexCoord;

    void main() {
        mediump vec2 st = vec2( vvTexCoord.x, vvTexCoord.y );
        lowp vec4 texel = texture2D( utDiffuse, st );
        lowp float font = texel.a;       // glyph coverage from the atlas
        lowp float fade = vvTexCoord.z;  // per-vertex fade-out alpha
        lowp float premultiply = uvColor.a;
        gl_FragColor = uvColor * font * fade * premultiply;
    }

We also pass in a vertex alpha to allow each rendered glyph to "fade out to nothing", hence the non-obvious "fade = vvTexCoord.z".

Since the designers aren't doing arbitrary rotations or scaling, our solution looks great.

My boss sent me a link to this article just after I saw it, so it looks like I'm off to research how easy or hard it would be to integrate SDF into our WebGL font rendering system. :-)

Speech therapy (0)

landofcleve (1959610) | about 9 months ago | (#45970477)

The guy in the video needs a lot of it before he gives a public speech again.

Re:Speech therapy (1)

behdad (320171) | about 9 months ago | (#45973163)

Thanks for the constructive feedback.

Re:Speech therapy (1)

landofcleve (1959610) | about 9 months ago | (#45977851)

I had considerable difficulty understanding the presenter while listening in a quiet environment; I could make out maybe three out of five words he spoke. Maybe it was a combination of the poor video quality or bad mic pickup along with his speech impediment, but regardless of the technical obstacles, the onus is still on the speaker to convey their message as articulately as possible. For that to be true, speech therapy is needed.

Re:Speech therapy (0)

Anonymous Coward | about 9 months ago | (#45982817)

I am a non-native English speaker from a country where English isn't used that much. I didn't have any difficulty understanding his speech. Maybe it's because I am not CONDITIONED for a specific speech pattern of English? It seems to be a handicap to be a native English speaker these days.

Re:Speech therapy (1)

landofcleve (1959610) | about 9 months ago | (#45986787)

You're probably right, but that doesn't change any of the facts for me. It also doesn't change that it was shared on an English-language website with many people who probably have the same 'handicap' of being native English speakers.