
c - What is the fastest way to draw a 2D array of color triplets on screen?


The target language is C/C++ and the program only has to work on Linux, but platform-independent solutions are obviously preferred. I run Xorg; XVideo and OpenGL are available.

How many FPS can I expect at 1024x768 on an Intel Core 2 Duo with Intel graphics? (Only the drawing counts; consider the arrays to already be in RAM. No precise prognosis is needed.)

Tags: c, linux, opengl, draw, xorg
asked Feb 2 '09 by Johannes Weiß; edited Apr 7 '16 by Ciro Santilli

Comments:

- Jay Kominek: What is a "2D array of color triplets"? A nice modern computer with some hardware acceleration should be able to put quite a few triangles on screen at a rate of more than 30 fps without storing anything in VRAM. Putting VRAM to use is easy, though, and will boost that rate even higher.
- Johannes Weiß: An RGB triplet. For every pixel I've got three values (one red, one green and one blue).
- Ciro Santilli: SDL version: stackoverflow.com/questions/28279242/…



6 Answers

Accepted answer (9 upvotes)

The fastest way to draw a 2D array of color triplets:

1. Use float (not byte, not double) storage. Each triplet consists of three floats from 0.0 to 1.0. This is the format implemented most optimally by GPUs (but use greyscale GL_LUMINANCE storage when you don't need hue - much faster!).
2. Upload the array to a texture with glTexImage2D.
3. Make sure that the GL_TEXTURE_MIN_FILTER texture parameter is set to GL_NEAREST.
4. Map the texture to an appropriate quad.

This method is slightly faster than glDrawPixels (which for some reason tends to be badly implemented) and a lot faster than using the platform's native blitting. It also gives you the option to repeat step 4 without step 2 when your pixmap hasn't changed, which of course is much faster.

Libraries that provide only slow native blitting include:

- Windows' GDI
- SDL on X11 (on Windows it provides a fast OpenGL backend when using HW_SURFACE)
- Qt

As to the FPS you can expect when drawing a 1024x768 texture on an Intel Core 2 Duo with Intel graphics: about 60 FPS if the texture changes every frame, and more than 100 FPS if it doesn't. But just do it yourself and see ;)
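A minimal sketch of these steps in legacy (fixed-function) OpenGL; the names WIDTH, HEIGHT, pixels, upload_once and draw_frame are illustrative, and an existing GL context (plus a driver that accepts the non-power-of-two height) is assumed:

#include <GL/gl.h>

#define WIDTH  1024
#define HEIGHT 768

/* step 1: float RGB storage, pixels[row][column] = {r, g, b}, each 0.0 .. 1.0 */
static GLfloat pixels[HEIGHT][WIDTH][3];
static GLuint  tex;

void upload_once(void)                 /* steps 2 and 3 */
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
                 GL_RGB, GL_FLOAT, pixels);
}

void draw_frame(void)                  /* step 4, repeatable without re-uploading */
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    /* swap buffers with whatever windowing layer owns the context */
}

When the pixmap has not changed, only draw_frame needs to run each frame; re-uploading the texture is only needed when the data itself changes.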
this answer answered Feb 2 '09, edited Feb 2 '09, by Iraimbilanja

Comments:

- Ciro Santilli: SDL texture sprites on X11 also seem to use hardware acceleration today. See also stackoverflow.com/questions/21392755/… and try the test/testspriteminimal.c example on version 2.0 in Ubuntu 15.10; nvidia-settings says that GPU usage goes up to 100%, and FPS looks high.



Answer (6 upvotes)

I did this a while back using C and OpenGL, and got very good performance by creating a full-screen-sized quad and then using texture mapping to transfer the bitmap onto the face of the quad. Here's some example code; I hope you can make use of it.

#include <GL/glut.h>

#define WIDTH 1024
#define HEIGHT 768

/* row-major image data: texture[row][column][channel], 8 bits per channel */
unsigned char texture[HEIGHT][WIDTH][3];

void renderScene() {
    // upload the pixel array and draw it as a full-screen textured quad
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(
        GL_TEXTURE_2D,
        0,
        GL_RGB,
        WIDTH,
        HEIGHT,
        0,
        GL_RGB,
        GL_UNSIGNED_BYTE,
        &texture[0][0][0]
    );

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0, -1.0);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0, -1.0);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0,  1.0);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0,  1.0);
    glEnd();

    glFlush();
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow(" ");
    glutDisplayFunc(renderScene);
    glutMainLoop();
    return 0;
}


this answer answered Feb 2 '09 by jandersson

Comments:

- codelogic: I would recommend this technique. Two minor points: 1) depending on the GPU, non-power-of-two texture sizes might not be supported; 2) for subsequent frames it's better to use glTexSubImage2D() on an existing texture, for performance reasons.
- Jessy Diamond Exum: OpenGL does not work this way any more. You may be able to get away with emulating the fixed-function pipeline in some cases, but it is not a good idea. The new GL stuff is a great improvement on the older stuff, but sadly the setup is kind of intense.
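A sketch of the per-frame update that codelogic's comment describes, assuming the texture storage was allocated once at startup (the NULL data pointer only reserves storage) and that the same texture object stays bound:

/* at startup: allocate texture storage once, without supplying data yet */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* each frame: replace the pixels of the existing texture instead of
   recreating it, which avoids reallocating texture storage every frame */
glTexSubImage2D(GL_TEXTURE_2D, 0,          /* target, mip level */
                0, 0,                      /* x and y offset    */
                WIDTH, HEIGHT,             /* region size       */
                GL_RGB, GL_UNSIGNED_BYTE, &texture[0][0][0]);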



Answer (1 upvote)

If you're trying to dump pixels to screen, you'll probably want to make use of SDL's "surface" facility. For the greatest performance, try to arrange for the input data to be in a layout similar to the output surface. If possible, steer clear of setting pixels in the surface one at a time.

SDL is not a hardware interface in its own right, but rather a portability layer that works well on top of many other display layers, including DirectX, OpenGL, DirectFB, and Xlib, so you get very good portability, and it's a very thin layer on top of those technologies, so you pay very little performance overhead on top of them.
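A minimal sketch with the SDL 1.2 API that was current at the time; the surface setup, the 24-bit format assumption, and the src buffer are illustrative (if the screen surface comes back in a different pixel format, each row needs a conversion instead of a plain memcpy):

#include <SDL.h>
#include <string.h>

#define WIDTH  1024
#define HEIGHT 768

/* source image: WIDTH x HEIGHT RGB triplets, one byte per channel, row-major */
static unsigned char src[HEIGHT][WIDTH][3];

void blit_frame(SDL_Surface *screen)
{
    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    /* copy whole rows at once rather than setting pixels one at a time;
       this assumes the surface layout matches the source array */
    for (int y = 0; y < HEIGHT; y++)
        memcpy((Uint8 *)screen->pixels + y * screen->pitch,
               src[y], WIDTH * 3);

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);
    SDL_Flip(screen);    /* present the finished frame */
}

/* typical setup:
   SDL_Init(SDL_INIT_VIDEO);
   SDL_Surface *screen = SDL_SetVideoMode(WIDTH, HEIGHT, 24, SDL_SWSURFACE); */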
this answer answered Feb 2 '09 by TokenMacGuy

Answer (1 upvote)

Other options apart from SDL (as mentioned):

- Cairo surfaces with glitz (in C; works on all platforms, but best on Linux)
- Qt Canvas (in C++, multiplatform)
- The raw OpenGL API or Qt OpenGL (you need to know OpenGL)
- Pure Xlib/XCB if you want to take non-OpenGL platforms into account

My suggestion: Qt if you prefer C++, Cairo if you prefer C.
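For the Cairo route, a rough sketch of wrapping an in-memory pixel buffer in an image surface and painting it onto a target surface; the buf layout, the sizes, and the target surface (which glitz or Xlib would provide) are assumptions:

#include <cairo.h>

#define WIDTH  1024
#define HEIGHT 768

/* CAIRO_FORMAT_RGB24 stores each pixel in a 32-bit word (top byte unused),
   so this is not a packed 3-byte-per-pixel buffer */
static unsigned char buf[HEIGHT * WIDTH * 4];

void paint_frame(cairo_surface_t *target)
{
    int stride = cairo_format_stride_for_width(CAIRO_FORMAT_RGB24, WIDTH);
    cairo_surface_t *img = cairo_image_surface_create_for_data(
        buf, CAIRO_FORMAT_RGB24, WIDTH, HEIGHT, stride);

    cairo_t *cr = cairo_create(target);   /* target: e.g. an Xlib-backed surface */
    cairo_set_source_surface(cr, img, 0, 0);
    cairo_paint(cr);

    cairo_destroy(cr);
    cairo_surface_destroy(img);
}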
this answer answered Feb 2 '09 by kazanaki

Answer (1 upvote)

The "how many FPS can I expect" question cannot be answered seriously, not even if you name the grandfather of the guy who did the processor layout. It depends on too many variables:

- How many triplets do you need to render?
- Do they change between frames? At what rate? (You won't notice a change that happens more often than about 30 times a second.)
- Do all of the pixels change all of the time, or just some of the pixels in some areas?
- Do you look at the pixels without any perspective distortion?
- Do you always see all of the pixels?
- Depending on the version of the OpenGL driver, you will get different results.

This could go on forever; the answer depends absolutely on your algorithm. If you stick with the OpenGL approach, you could also try different extensions (http://www.opengl.org/registry/specs/NV/pixel_data_range.txt comes to mind, for example) to see if one fits your needs better, although the already mentioned glTexSubImage2D() method is quite fast.
this answer answered Apr 30 '09 by akira

Answer (0 upvotes)

How many FPS can I expect on 1024x768? The answer to that question is dependent on so many factors that it's impossible to tell.
this answer answered Feb 2 '09 by Bombe

Comments:

- Johannes Weiß: OK, let's make that more precise: statements like "between 60 and 100 FPS" or "surely below 30 FPS" are fine; only drawing counts, so imagine all these arrays are ready in RAM (not cached or in video card memory).
- Bombe: Well, you will surely get something between 0 and ∞ fps.
- Luc: Does not answer the question at all.
