
Graphics hardware compatibility



I was thinking a bit about our OpenGL system requirements (in terms of capabilities rather than performance).

We need GL 1.3 features at an absolute minimum - anything older than that (e.g. the pre-Vista Windows software GL driver) is never going to work.

We currently want GL 2.0 features at a maximum - we don't use anything that's not part of the 2.0 core (except for GL_EXT_framebuffer_object, as an optional optimisation for shadows).

I think it's worthwhile that we support fairly ancient graphics hardware. Particularly on Linux it seems people often have old low-end systems, and I imagine they'd appreciate a new game that supports their OS and doesn't expect them to buy a new gaming graphics card. (I've seen quite a few posts on the forum from people with low-end graphics so it's not just hypothetical, though I have no idea of the relative numbers.)

But one consequence is that our renderer has got quite convoluted, with high-performance paths and lower-performance multipass/software-emulated fallback paths in several areas. That makes it inflexible - if someone wants to add a feature like better fog rendering (which I probably do), they need to edit something like six RenderModifiers (which set up the GL texture state) and four ModelVertexRenders (which set up the vertex array state). Other possible feature additions (e.g. adding normal maps) or complex bug fixes will face similar difficulties.

It also makes it hard to test, given the number of independent alternative paths - there's something like 3 shadow modes, 2 player-colour modes, 2 vertex-processing modes, 2 VBO modes, 2 FBO modes, etc, so there's no way to verify every combination.

So I think I want to propose raising our requirements, in order to remove some of this compatibility code. (I'm not sure I agree with all of these proposals, but I want to see what other people think (y))

Proposal #1: Always require >= 4 TMUs (i.e. GL_MAX_TEXTURE_UNITS).

Currently GL only guarantees 2. Our code is compatible with 2, but shadows are disabled unless you have 3 (since recent fog-rendering changes).

If we always require 3, we can get rid of SlowPlayerColorRender, which saves ~150 lines of code and one path. If we require 4, we should avoid problems when adding better fog rendering to models.

This will prevent the game rendering correctly on GeForce4 MX and Radeon 7500 and older. (Newer Radeons, and GeForce3+ and Intel GMA 900 have >= 4, so they'll be fine.) (NB: GeForce4 MX counts as older than GeForce3, but otherwise "older" basically means "lower number".)

Proposal #2: Don't support shadows without GL_ARB_shadow and GL_ARB_depth_texture.

Non-depth-texture shadows don't support self-shadowing. On my Intel 4500MHD they're hugely slower than depth-texture shadows; on my GF8 there's no difference. For low-end hardware we should probably have a totally new shadow system that just draws grey blobs under units and buildings, so there doesn't seem any value in keeping this fancier shadow system working there. This would save some tens of lines of code throughout various functions, and would save another path to test.

We'd lose shadows on Intel 865G and Radeon 9250 and GeForce4 MX and older.
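For reference, these two extensions are what let the texture unit do the depth comparison itself, instead of us faking it with multitexturing tricks. The setup is roughly this (an illustrative fragment, not a complete program):

```c
/* Allocate a depth texture for the shadow map (ARB_depth_texture) */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, size, size, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* Have the texture unit compare the R coordinate against the stored
   depth, returning lit/shadowed (ARB_shadow) */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);
```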

Proposal #3: Always require GLSL vertex shaders (i.e. ARB_shader_objects and ARB_shading_language_100 and ARB_vertex_shader).

When vertex shaders are available, we use them for lighting and for instancing (an optimisation when rendering the same geometry in multiple positions). Without them, we compute lighting in software and disable instancing.

If we require vertex shaders, we can get rid of FixedFunctionModelRenderer (~400 lines) and one major path.

We'd lose compatibility with Intel 945G and Radeon 9250 and older. (We'd still work with Intel 965G, Radeon 9500, and any NVIDIA). Some old drivers emulate vertex shaders in software but that shouldn't really be slower than our own software processing. We might hit some bugs like this if newer drivers haven't fixed them, in which case we should just find and fix our bugs. So the risks aren't entirely trivial, but I think the simplification to the code and added flexibility would be significant.
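For context, the lighting we currently compute in software on the fallback path is only a few lines as a GLSL vertex shader - a rough sketch (uniform names hypothetical), assuming one directional sun plus ambient:

```glsl
uniform vec3 sunDir;       // normalised direction towards the scene
uniform vec3 sunColor;
uniform vec3 ambientColor;

void main() {
    float d = max(dot(gl_Normal, -sunDir), 0.0);
    gl_FrontColor = vec4(ambientColor + sunColor * d, 1.0);
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```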

Proposal #4: Always require GLSL fragment shaders (i.e. ARB_fragment_shader).

Currently we only use fragment shaders for fancy water; everything else uses the cumbersome multitexturing system (in which it takes about 12 lines of code to multiply a texture by a colour). If we required them, we could incrementally rewrite all the existing multitexturing code to be far simpler and cleaner and more flexible.
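To illustrate the verbosity: modulating a texture by a constant colour via the GL 1.3 combiner interface looks roughly like this (just the per-texture-unit state setup, not a complete program):

```c
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, colour);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
```

versus the single-line fragment shader equivalent, "gl_FragColor = colour * texture2D(tex, gl_TexCoord[0].st);".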

Compared to requiring vertex shaders, we'd lose compatibility with GeForce3 and GeForce4. (Intels and Radeons shouldn't be affected). I have no idea of the performance effects on old hardware, or whether it's particularly buggy with old drivers. (New hardware almost certainly implements multitexturing on top of shaders, but I don't know if early implementations (e.g. GeForce FX) or Intel ones are much slower with shaders.)

We've stated GeForce3 as our minimum requirement, but we stated that in like 2003. The Unity hardware survey (which is far less gamer-focused than the Steam survey) puts the GF4 at 0.1%, and GF3 at 0.0%. That's not a lot (and it'll only decrease over time). I liked my GeForce4, until it burnt itself out (in like 2006), since it was nice and fast and did everything except pixel shaders, so I'd be sad to abandon it, but maybe it's time to move on :P


I don't have too much to say about the specifics, but in general I'd say it's not worth supporting too much old hardware, especially not if it slows down development so much that the hardware will be completely obsolete by the time the game is finished anyway (y)

About the specific proposals: I have no idea how common these graphics cards are, or about the technical details, but I think the second proposal is a no-brainer. If it gives us benefits, and only makes the game look simpler on older hardware (rather than not work at all), then I'd say go ahead with it. The others are harder to decide, but that shouldn't be a problem, especially since the game would probably be slow with shadows on those cards in either case.


I agree that it'd be good to cut down on the complexity of the renderer; let's just weigh the cost/benefits of each:

Proposal #1: Always require >= 4 TMUs

This one's hardly controversial - Radeon 7500 was introduced in 2001 and GF4MX is a joke of a card.

Proposal #2: Don't support shadows without GL_ARB_shadow and GL_ARB_depth_texture.

Agree with Erik, the game works well enough without shadows, so no problem there.

Proposal #3: Always require GLSL vertex shaders

This is where it starts to hurt. 945G is only 5 years old and still quite common in the Unity survey.

I think it's also worth keeping the fixed-function fallback in case of driver trouble/bugs, and to support integrated/crappy discrete graphics commonly found in laptops.

Proposal #4: Always require GLSL fragment shaders

Also questionable per the above argument. 945G is reported to have problems with shaders. Weren't you using a 945 until recently? Did it work OK? I think that's one platform that's definitely worth supporting.

Weren't you using a 945 until recently? Did it work OK? I think that's one platform that's definitely worth supporting.

I had a 945GM - it ran, but (if I remember correctly) very slowly. It was enough to develop the game but I don't think it would really count as playable. I only get ~20fps on a 4500MHD, which should be well over twice as fast. I don't have any exact performance figures for the 945, though - does anyone here still have access to that hardware?

In the absence of additional data, I think it seems the 945 will be too slow to ever run the game acceptably, so there doesn't seem any value in maintaining compatibility with its feature set.


Per discussion on IRC:

The 965 didn't always support GLSL shaders, but it does with newer drivers, and we can tell users to upgrade their drivers, so that's okay.

Apart from the GeForce4, we don't remember any problems that were resolved by the fixed-function renderpath, so I don't think it's apparent that we need to keep it for compatibility with buggy drivers.

It would be great if we had real data about the compatibility and performance issues, rather than having to guess. We don't have the resources to test on a wide range of hardware ourselves, but we could get users to do it (as part of this). In particular it can report their GPU and driver capabilities; and also it could report some rough performance figures (perhaps by a very simplistic mechanism like capturing the in-game profiler's output a few seconds after they first launch a map). That should give us thousands of data points to make more informed judgements about what hardware is commonly used and has good enough performance to be worth supporting.
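The payload for such a report might look something like this (all field names purely hypothetical - the actual format would be whatever the reporting code settles on):

```json
{
  "gl_vendor": "Intel",
  "gl_renderer": "Intel(R) GMA 4500MHD",
  "gl_version": "2.1",
  "gl_extensions": ["GL_ARB_vertex_shader", "GL_ARB_fragment_shader", "GL_EXT_framebuffer_object"],
  "gl_max_texture_units": 8,
  "profiler": { "msec_per_frame": 43.5, "map": "default" }
}
```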

If we do this, it'd be very useful to get it into Alpha 4, so we could start collecting data immediately and use it as a guide before Alpha 5. I'd guess that implementing this might delay the release by a week (hopefully a bit less), but I'm currently thinking the extra two months of data (compared to waiting until the next release) might be worth that.


Here's my two cents. Supporting old hardware is one thing, making it look its best is another. If a graphics card is too old and doesn't support the features needed for FoW or shadows, the game should still run on it, but with shadows disabled, water reflections disabled, and FoW becoming SoD.

So the idea is, the newer the graphics card, the more graphical features it 'unlocks'.

I don't fully understand all the proposals, but it sounds like #1 and #2 could be tested for very easily:

// (fow/shadows/water and hasExtension() are pseudocode for whatever the engine exposes)
GLint maxTextureUnits;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTextureUnits);
if (maxTextureUnits < 4) {
    fow.opacity = 0;
    fow.colour = rgb(0, 0, 0);
    shadows.enabled = false;
    water.reflections.enabled = false;
}
if (!hasExtension("GL_ARB_shadow") || !hasExtension("GL_ARB_depth_texture")) {
    shadows.enabled = false;
}

A dozen lines of code, and you can get rid of dozens of lines of old code that try to support rendering where GL_MAX_TEXTURE_UNITS < 4, while still 'supporting' (keeping it runnable on) old hardware.

As for baseline requirements, 512MB of graphics memory is pretty common these days, and most cards with that much RAM should support the proposals. So it might be worth saying "minimum: 256MB, recommended: 512MB".


The coding's not that simple in reality (y). Well, for disabling shadows it is since we already support a no-shadow mode and just need to enable it. But for <4 texture units we'd need a whole separate rendering environment that doesn't try using the non-existent units and that still renders something sensible (i.e. doesn't break gameplay by making objects visible or invisible at the wrong times), which is hundreds of lines to maintain and test (and it won't help anything except a ten-year-old Radeon). (See e.g. FastPlayerColourRender vs SlowEtc here. Not even that Radeon will need the Slow path so it's pretty much a total waste of energy.)


Alright, then we support hardware no older than 3 years? Sounds about how long someone might have a computer before upgrading to something new.

But I agree, we should collect user data. Something in the game which, on loading, prompts the user "Help us improve, send us anonymous usage data" and sends CPU, GPU etc stats. Note: it needs to prompt them - we cannot(!) send data without their permission - and it should only be done once: they load the game and accept or deny, and it shouldn't ask again when they load a second time.

These changes should be in Alpha 4 if we want any hope of tidying up that render code in Alpha 5. If you need me to, I'll write a Rails application (or PHP, but I'd much prefer Rails) which can accept the data packet in JSON, parse it, and store it in a MySQL database.


I've kept computers far longer than 3 years myself :). With proposal #3 we'd still support NVIDIA cards that are 10 years old (similar for ATI), and Intel chips (which have historically been unusable for gaming) that are 4 years old. With proposal #4 we'd support NVIDIA cards 8 years old (and still 4 for Intel).


I wouldn't expect my five-year-old Intel-based computer to work with newer games - I haven't used it for two years now. If it helps with development I'd support raising the requirements, especially since there's still time before the release candidate stage.


Out of interest, what graphics device does your old computer have? (If you've got easy access to it and have run the game on it then system_info.txt in the logs directory would say.)

I experimented a little with fragment shader performance on Intel GMA 4500MHD (i.e. pretty slow non-gaming mobile GPU from 2008) on Linux (Mesa 7.10, xf86-video-intel 2.13.0). I made a map with 300 of hele_wall, filling the screen at default zoom. I replaced FastPlayerColorRender with a shader like

uniform sampler2D textureMap;
uniform vec4 playerColour;

void main() {
    vec4 tex = texture2D(textureMap, gl_TexCoord[0].st);
    gl_FragColor.rgb = gl_Color.rgb * tex.rgb * mix(playerColour.rgb, vec3(1.0, 1.0, 1.0), tex.a);
    gl_FragColor.a = tex.a;
}

I disabled vertex shaders (since they conflict with fragment shaders in the current renderer design). I disabled terrain rendering, and shadows. I ran at a resolution of 2960x1440. It ran at around 7fps, and enabling the fragment shader (vs the original multitexturing code) made it about 10% slower. Disabling texturing entirely made it about 10% faster than the multitexturing.

I think I can possibly conclude that fragment shaders are slightly slower than equivalent multitexturing in this specific situation, but only by a little - it takes a lot of objects and a very high resolution to make performance be fill-rate limited, and then the texturing/shading is still only a small fraction of that. In normal viewing conditions (i.e. not artificial worst-case settings) the difference will be negligible, so there's no practical problem here. (The difference may be radically different with different OS/drivers/hardware, of course, so it's hard to conclude anything in general.)


Out of interest, what graphics device does your old computer have? (If you've got easy access to it and have run the game on it then system_info.txt in the logs directory would say.)

When I moved to my new small flat 4 months ago I took my old computers to my parents house for storage, so definitely not easily accessible anymore. Sorry. Had you asked 6 months ago...


On my old Lenovo the game would go really slow with shadows enabled; without shadows it would run faster, but still slow. I think it was an 865G, I'm not sure though. On my newer laptop it runs much faster with shadows but tends to lag when many units share the screen. The funny thing is, on Linux water reflection works fine, but on Windows it does not. And I have to disable shadows for the FOW to work. A blob of black below the units would make everything look more grounded for older GFX cards.


Could you perhaps upload the game's system_info.txt? That might give some useful technical details. (%appdata%\0ad\logs\system_info.txt on Windows, ~/.config/0ad/logs/system_info.txt on Linux, I think; if the behaviour differs between OS then both could be interesting :))


Thanks - that says "Mobile Intel® 965 Express Chipset Family" so it should support fragment shaders (hence water reflections) on Windows if you install up-to-date graphics drivers. This'd be the minimum supported Intel chip under proposals #3 and #4 (except on Linux with very recent drivers).

And I have to disable shadows for the FOW to work.

Hmm, do you mean it renders incorrectly with shadows enabled? What does it look like? Which OS is this on?

In other news: I'm currently working on the code to report users' hardware details (after they opt in) - that seems to be going okay, so hopefully the extra information will help with making decisions here :)

  • 3 weeks later...

The current data (from ~100 users, via SVN and some dev-snapshot Linux packages) shows that GL_ARB_vertex_shader/GL_ARB_fragment_shader are primarily just missing on Intel 945G (as expected) and Mesa R300 (the classic (pre-Gallium) Linux driver for Radeons older than the HD 2000/3000 series (released around 2007)). I wasn't previously aware of that R300 limitation - it sounds a more serious compatibility problem than the 945G (which is too slow to work anyway), depending on how widespread it is (which we can find out when collecting more data from the next release).

On the other hand, 100% of users so far have support for GL_ARB_vertex_program/GL_ARB_fragment_program (the old non-GLSL, assembly-based syntax). That's still more powerful than standard multitexturing, and it's got to be less painful to write and maintain. So:

Proposal #5: Always require GL_ARB_vertex_program and GL_ARB_fragment_program.

We could incrementally rewrite all the existing multitexturing code to be far simpler and cleaner and more flexible. We could rely on more advanced features: multitexturing is limited to 4 'instructions' and one texture per instruction, but ARB_*_program allows lots (even 945G allows 96 instructions and 8 simultaneous textures), so we could do more complex graphical effects (diffuse plus player-colour plus specular plus lighting plus shadows plus fog-of-war plus distance-fog etc) without the complexity and performance cost of multi-pass rendering. (We're pushing the 4-texture limit already, so adding more graphical effects will result in larger slowdowns if we don't switch to programmable shaders than if we do.)
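To give a flavour of the assembly syntax, the player-colour shader from earlier in the thread would look roughly like this as an ARB fragment program (an untested sketch; the parameter binding is illustrative):

```
!!ARBfp1.0
# diffuse * vertex colour, with player colour blended in by the texture's alpha
TEMP tex, blend;
PARAM playerColor = program.local[0];
TEX tex, fragment.texcoord[0], texture[0], 2D;
LRP blend, tex.a, {1.0, 1.0, 1.0, 1.0}, playerColor;   # mix(playerColour, white, tex.a)
MUL blend.rgb, tex, blend;
MUL result.color.rgb, blend, fragment.color;
MOV result.color.a, tex.a;
END
```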

Compared to GLSL, these extensions are not part of the GL standard (so some future drivers might (unlikely) theoretically drop compatibility, and OpenGL ES mobile devices probably don't support them) but they seem almost universally implemented on the desktop. The syntax is much uglier than GLSL, but if that's a problem then for complex shaders we could write in GLSL and use NVIDIA's Cg compiler to convert them to assembly syntax. (Don't want to use Cg at runtime because it's heavy and not open source, but we can run it offline and stick the output in SVN.)

I think we'd still want GLSL for more complex effects (since it's a more powerful language), so we'd probably have to add a shader abstraction into the engine which lets us switch easily between ARB_*_program and ARB_*_shader, which I assume wouldn't be much of a problem - the language syntaxes and APIs differ but the concepts are fundamentally the same.


(For people who can't see the old private thread, it's about doing global illumination like this.)

It wouldn't have a major effect on that, but it would help a bit (same as proposal #4). To be specific:

If we added that lighting, I imagine we'd keep the current sharing of diffuse textures between buildings, and add a secondary low-res lighting texture with a secondary UV coordinate set per building. (The alternative is for each building to have its own diffuse texture with the lighting baked in, with no sharing between models or between polygons on the same model, which sounds a lot more expensive in memory usage (though I have nothing quantitative)).

Currently buildings are rendered as something like "(highlightcolour*diffuse*a + playercolour*(1-a)) * shadow + ambient", which needs four steps. As mentioned in some other thread I'd like to do smooth fog-of-war shading on buildings so it'd be like "(highlightcolour*diffuse*a + playercolour*(1-a)) * shadow * fow + ambient", which is five. If we added an extra layer of lighting, it'd be like "(highlightcolour*diffuse*a + playercolour*(1-a)) * lighting * shadow * fow + ambient", which is six. Specular would add another, normal mapping would add more, etc. (We may never need all those things but we'll probably want at least some.)

With traditional non-shader-based multitexturing we can only do four steps at once, so for anything more complex we have to do half the work and then render everything again in a second pass to do the rest of the work, which is a bit slow and pretty awkward and restrictive. With proposal #4/#5, this becomes relatively trivial - you just write out the whole equation and it'll work fine.
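For instance, the six-factor building equation above could be written as a single fragment shader pass, something like this (sampler/uniform names hypothetical):

```glsl
uniform sampler2D diffuseTex, lightingTex, fowTex;
uniform sampler2DShadow shadowMap;
uniform vec3 playerColour, ambient;
varying vec4 shadowCoord;

void main() {
    vec4 diffuse = texture2D(diffuseTex, gl_TexCoord[0].st);
    // highlightcolour*diffuse*a + playercolour*(1-a), with gl_Color as the highlight
    vec3 base = mix(playerColour, gl_Color.rgb * diffuse.rgb, diffuse.a);
    float lighting = texture2D(lightingTex, gl_TexCoord[1].st).r;
    float shadow = shadow2DProj(shadowMap, shadowCoord).r;
    float fow = texture2D(fowTex, gl_TexCoord[2].st).r;
    gl_FragColor = vec4(base * lighting * shadow * fow + ambient, 1.0);
}
```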

There'd still be other challenges with global illumination (doing all the secondary UV unwrapping and computing the lighting and helping modders cope with the new features), but it would become easier to efficiently implement the renderer code to handle it.

Incidentally, I've gone off proposal #5 now: it doesn't help anything except the old R300 drivers (it'd still be incompatible with GeForce 3/4, and everything else supports GLSL) and they wouldn't have shadows (since R300 doesn't have GL_ARB_fragment_program_shadow), so it seems like minimal gain for the effort. Maybe better to keep a stripped-down minimal non-shader-based path (no shadows, no smooth fog-of-war on buildings, ugly water, etc) for maximum compatibility, and prefer a fully GLSL-based path with all the effects - that'd be less effort than trying to keep all the features working in the non-shader-based path, and would allow nicer better-performance graphics on almost all systems. I'm very indecisive, though :P


I'm not sure I would recommend doing anything with global illumination because it would be too much of a strain on the art department. I was just curious, as it was something I always wanted to do. It would probably be just as easy to burn the shadow map into the diffuse map.

If indeed the art department is thinking of redoing all the building artwork and making individual graphic files for each building and creating normal/spec mapping, then a unique (with burned in shadows) diffuse map wouldn't be out of the question.

I think that all of these map types are 'future proofing/assisting' the graphics engine, so that it might be appealing to other developers that might take advantage of these capabilities, even though our current WFG staff might not have the resources to do so.

An aside... W mapping has always intrigued me because it is a portion of the UVW dataset that is rarely ever used (to my knowledge). Perhaps it could be used to store vertex shading data - again you would be limited by the artist's ability to understand and harness the 3D program.

So, with the combination of this thread here and this thread... are you looking at pre-configuring the graphics settings for the users? Disallowing graphical features that their graphics card is unable to render correctly? I like your proposal #2. Make a break - separate graphics for old hardware from newer; this would free you to make updates for the newer path whenever needed without having to worry about the old hardware anymore.

Just need to make sure that playing in one config or the other doesn't give one player an advantage over another.


In theory, I think the global illumination (specifically ambient occlusion) could be done entirely automatically - just need an algorithm to do the UV unwrapping (surely there must be a not-totally-useless one we could copy) and another to compute the lighting (which seems straightforward), so it would involve a one-time programming effort rather than a repeated art effort. Still more effort than not doing it, though :). (NVIDIA has some material on ambient occlusion. In an RTS I think we'd only need it for buildings (not units), and we don't have particularly dynamic lighting (we can pretend there's just a single ambient skylight and ignore the direction), so we wouldn't need any of the fancy real-time stuff and could stick with the simpler static offline approaches.)
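As a sketch of how simple the offline computation could be, here's a toy hemisphere-sampling ambient occlusion estimator (purely illustrative - a real baker would ray-cast each sample point against the building mesh rather than take an abstract occlusion predicate):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

// Fraction of hemisphere directions (around a +Z normal) that escape to
// the sky. Deterministic grid sampling over azimuth and elevation.
double ambientOcclusion(const std::function<bool(const Vec3&)>& occluded,
                        int steps = 16) {
    const double kPi = 3.14159265358979323846;
    int visible = 0, total = 0;
    for (int i = 0; i < steps; ++i) {
        for (int j = 0; j < steps; ++j) {
            double azimuth = 2.0 * kPi * (i + 0.5) / steps;
            double z = (j + 0.5) / steps;            // cos(elevation), in (0,1)
            double r = std::sqrt(1.0 - z * z);
            Vec3 dir{r * std::cos(azimuth), r * std::sin(azimuth), z};
            ++total;
            if (!occluded(dir)) ++visible;
        }
    }
    return double(visible) / total;
}
```

A point under open sky gets 1.0; a point next to a wall blocking half the hemisphere gets about 0.5, which is the kind of value you'd bake into the lighting texture.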

I don't think I see the significance of W mapping - how is it different to any other per-vertex attribute, like RGB(A) vertex colours? (OpenGL shaders let you have at least 16 arbitrary 4-component values per vertex, which you can use as texture coordinates or colours or anything else you want, so I think there's no limit other than in the art tools' ability to make the data comprehensible.)
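E.g. baked per-vertex occlusion could be fed in as just another generic attribute (names hypothetical):

```glsl
attribute float vertexOcclusion;   // 0 = fully occluded, 1 = open sky
varying float occlusion;

void main() {
    occlusion = vertexOcclusion;   // interpolated, multiplied into lighting later
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```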

As an attempt to be clearer about my more recent thoughts:

Proposal #6: Split the renderer into two modes: a compatibility mode designed for OpenGL 1.4 capabilities, with no unnecessary or optional features (no shadows, no non-ugly water, no specular maps, no distance fog, etc); and a GLSL shader mode designed for OpenGL 2.0 capabilities (plus FBOs since they're ubiquitous). The shader mode will include some configurable options for performance (optional shadows, lower-quality shadows, lower-quality-but-still-reflective water, etc) so it will still scale to low-end hardware (i.e. integrated Intel graphics).

Few users should get the compatibility mode (it looks like pretty much only the R300 Linux drivers, or people with seriously buggy drivers that break in the shader mode), so we won't spend much effort making it look good or optimising performance, but at least it should be playable (with no unfair (dis)advantages). Then we can focus on implementing most features and most configuration options and most optimisations in a consistent shader framework, which I believe will slightly simplify the code compared to the current fixed-function/shader mixture and will greatly simplify the addition of new features.


Cool idea - it would be neat if a separate algorithm could do it automatically. Otherwise the artist would have to set up each building one at a time; a batch process would be much nicer. I'm not even sure how ambient occlusion would look with shadows. As you can tell from that screenshot you linked, we did the testing before Nicoli implemented the self-shadowing feature.

I guess W mapping isn't that significant. I thought .dae files were saving W data, but looking at them I think it's UV0, and they don't seem to store it. Nevermind :P

Proposal #6 - Nice, I like it (y)


On this subject, I just saw something suggesting per-vertex lighting values as a way to do ambient occlusion, which would be much simpler since there's no need for texture unwrapping and would have no rendering cost. I'd guess our buildings have too few vertexes / too many large polygons to make that look decent, though.

  • 2 weeks later...

Got lots more data now (from over a thousand users), so it's slightly more meaningful to look at numbers.

There are some relatively common devices in the stats that aren't interesting:

* "GDI Generic" - the useless Windows XP software fallback, never going to work.

* Radeons with "OpenGL 1.4 ([...] Compatibility Profile Context)" - misconfigured with indirect rendering which causes poor performance; users should be told to configure their systems properly.

* GeForce2 MX, GeForce3 MX, RAGE/SiS/VIA, probably Mesa DRI R100 - not enough texture units to easily support decent rendering even without shaders, and not enough users (~1% of total) to be worth expending effort on.

The most relevant extension here is GL_ARB_fragment_shader. Excluding the things above, about 14% of users don't support that. That's mostly a load of old Intel chips plus Mesa R300 (fairly old Radeons with recent but feature-poor Linux drivers).

I guess the R300 driver situation could improve over the next year or so, by people moving to the Gallium driver, but the long tail of old Intels and miscellaneous others won't disappear any time soon, so it'll still be maybe 5%-10% of users.

Compare to GL_ARB_fragment_program, which (excluding the above) is only missing for 2% of users, and only on very old hardware.

I'd conclude we definitely can't require GL_ARB_fragment_shader, since that would block over a tenth of our potential users, but it's widespread enough that I think it's worth optimising for GLSL shaders at the expense of that tenth. But I'm now thinking it probably would be worth supporting GL_ARB_fragment_program too, if it doesn't add huge code complexity (which I don't think it should): it'll allow us to have better performance and better graphical effects (water cheaply reflecting the sky map (not reflecting units/etc), shadows (I think), particle effects) for an extra ~12% of users, and the remaining ~2% of users will be on such terrible hardware that we just need to limp along and don't need to bother rendering properly (e.g. we could skip all lighting if that saves some code, as long as it remains playable).

So, new (probably final) proposal, trying to be more concrete this time:

* Implement a shader abstraction that can handle both GL_ARB_{fragment,vertex}_shader and GL_ARB_{fragment,vertex}_program, so the renderer can easily switch between whichever is appropriate.

* Gradually reimplement all current behaviour using GL_ARB_{fragment,vertex}_program.

* Gradually remove current fixed-function behaviour once we reimplement it, if it's not trivial to keep that code around and it's not critical for playability.

* Prefer using GL_ARB_{fragment,vertex}_program when adding new effects. Use GL_ARB_{fragment,vertex}_shader only if it's impossible with _program, or if it would be awkward with _program and is an optional high-end feature (soft shadows, maybe normal mapping, etc).

That sounds to me like the most reasonable compromise between making use of 'modern' (only 8 year old) graphics features, and retaining compatibility so we don't make a noticeable number of players unhappy, given that we apparently have significant interest from players with pretty old and low-end hardware.
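A minimal sketch of the selection logic behind that abstraction (all names hypothetical; the real engine would load actual ARB-program or GLSL sources behind each backend):

```cpp
#include <cassert>

struct GLCapabilities {
    bool arbProgram;   // GL_ARB_{vertex,fragment}_program
    bool glsl;         // GL_ARB_{vertex,fragment}_shader
};

enum class RenderPath { FixedFunction, ARBProgram, GLSL };

// Prefer ARB programs for the standard effects (maximum compatibility);
// fall back to GLSL, then to the limp-along fixed-function path.
RenderPath choosePath(const GLCapabilities& caps) {
    if (caps.arbProgram)
        return RenderPath::ARBProgram;
    if (caps.glsl)
        return RenderPath::GLSL;
    return RenderPath::FixedFunction;
}

// Optional high-end features (soft shadows, normal mapping, ...) are
// only offered where GLSL is available.
bool supportsOptionalEffects(const GLCapabilities& caps) {
    return caps.glsl;
}
```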

