Plural graphics processors

Subclass of:

345 - Computer graphics processing and selective visual display systems

345501000 - COMPUTER GRAPHIC PROCESSING SYSTEM

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
345505000 Parallel processors (e.g., identical processors) 139
345506000 Pipeline processors 132
345503000 Coprocessor (e.g., graphic accelerator) 48
345504000 Master-slave processors 12
Entries
Document | Title | Date
20080204460DEVICE HAVING MULTIPLE GRAPHICS SUBSYSTEMS AND REDUCED POWER CONSUMPTION MODE, SOFTWARE AND METHODS - Many computing devices may now include two or more graphics subsystems. The multiple graphics subsystems may have different abilities, and may, for example, consume differing amounts of electrical power, with one subsystem consuming more average power than the others. The higher power consuming graphics subsystem may be coupled to the device and used instead of, or in addition to, the lower power consuming graphics subsystem, resulting in higher performance or additional capabilities, but increased overall power consumption. By transitioning from the use of the higher power consuming graphics subsystem to the lower power consuming graphics subsystem, while placing the higher power consuming graphics subsystem in a lower power consumption mode, overall power consumption is reduced.08-28-2008
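The power transition described in this abstract can be pictured with a small control sketch. The Python below is illustrative only; the subsystem class and method names are invented for the example and are not taken from the application.

```python
# Hypothetical sketch of the power transition described above; class and method
# names are invented for illustration and are not from the application.
from dataclasses import dataclass

@dataclass
class GraphicsSubsystem:
    name: str
    avg_power_w: float
    active: bool = False

    def activate(self) -> None:
        self.active = True
        print(f"{self.name}: now driving rendering")

    def enter_low_power_mode(self) -> None:
        self.active = False
        print(f"{self.name}: placed in lower power consumption mode")

def transition_to_low_power(high_power: GraphicsSubsystem, low_power: GraphicsSubsystem) -> None:
    """Hand rendering to the low-power subsystem, then park the high-power one."""
    low_power.activate()
    high_power.enter_low_power_mode()

if __name__ == "__main__":
    discrete = GraphicsSubsystem("discrete", avg_power_w=35.0, active=True)
    integrated = GraphicsSubsystem("integrated", avg_power_w=5.0)
    transition_to_low_power(discrete, integrated)
```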
20080259086HYBRID IMAGE PROCESSING SYSTEM - The present invention provides a hybrid image processing system, which generally includes an image interface unit for receiving image data corresponding to a set of images, generating commands for processing the image data, and sending the images and the commands to an image processing unit of the hybrid image processing system. Upon receipt, the image processing unit will recognize and interpret the commands, assign and/or schedule tasks for processing the image data to a set of (e.g., special) processing engines based on the commands, and return results and/or processed image data to the image interface unit.10-23-2008
20080266300Scalable High Performance 3D Graphics - A high-speed ring topology. In one embodiment, two base chip types are required: a “drawing” chip, LoopDraw, and an “interface” chip, LoopInterface. Each of these chips has a set of pins that supports an identical high speed point to point unidirectional input and output ring interconnect interface: the LoopLink. The LoopDraw chip uses additional pins to connect to several standard memories that form a high bandwidth local memory sub-system. The LoopInterface chip uses additional pins to support a high speed host computer host interface, at least one video output interface, and possibly also additional non-local interconnects to other LoopInterface chip(s).10-30-2008
20080316215Computing device for running computer program on video card selected based on video card preferences of the program - A computing device includes a number of video cards. The computing device also includes a mechanism to determine one or more parameters relating to video card parameters of a target computer program. The mechanism is to select a video card from the video cards of the computing device based on the parameters, and is to run the target computer program on the video card selected.12-25-2008
20090079746SWITCHING BETWEEN GRAPHICS SOURCES TO FACILITATE POWER MANAGEMENT AND/OR SECURITY - One embodiment of the present invention provides a system that switches between frame buffers which are used to refresh a display. During operation, the system refreshes the display from a first frame buffer which is located in a first memory. Upon receiving a request to switch frame buffers for the display, the system reconfigures data transfers to the display so that the display is refreshed from a second frame buffer which is located in a second memory.03-26-2009
20090091576INTERFACE PLATFORM - A system and method for presenting video/graphics images on a monitor requires a control unit, with a central processing unit (CPU), a system memory and a frame buffer that are mounted on a motherboard. It also includes a display module, without memory, that is remotely distanced from the control unit and from its motherboard. In this configuration, the display module includes a graphics processing unit (GPU) that is connected to a video monitor. In operation of the system, memory and frame buffer functions are controlled by the CPU in the control unit. A high speed serial bus interlink, that may either be a wire, fiber, or wireless connection, connects the control unit with the display module where images are composed for presentation on the monitor.04-09-2009
20090160865Efficient Video Decoding Migration For Multiple Graphics Processor Systems - Embodiments of the invention as described herein provide a solution to the problems of conventional methods as stated above. In the following description, various examples are given for illustration, but none are intended to be limiting. Embodiments include a frame processor module in a graphics processing system that examines the intra-coded and inter-coded frames in an encoded video stream and initiates migration of decoding and rendering functions to a second graphics processor from a first graphics processor based on the location of intra-coded frames in a video stream and the composition of intermediate inter-coded frames.06-25-2009
20090167771Methods and apparatuses for configuring and operating graphics processing units - A graphics processing system with multiple graphics processing cores (GPCs) is disclosed. The apparatus can include a peripheral component interconnect express (PCIe) switch to interface the GPCs to a host processor. The apparatus can also include a transparent bus to connect the GPCs. The transparent bus can be implemented with two PCIe endpoints on each side of a nontransparent bridge where these three components provide a bus interconnect and a control line interconnect between the GPCs. Other embodiments are also disclosed.07-02-2009
20090179902Dynamic Data Type Aligned Cache Optimized for Misaligned Packed Structures - A method and apparatus for processing vector data is provided. A processing core may have a data cache and a relatively smaller vector data cache. The vector data cache may be optimally sized to store vector data structures that are smaller than full data cache lines.07-16-2009
20090189907MEDICAL SUPPORT CONTROL SYSTEM - A medical support control system that can control a medical device, comprising: a plurality of video interface cards that are detachable from the medical support control system and that are used for the medical support control system for converting, when a video signal is input from an external environment, the video signal input from the external environment into a common signal and vice versa, said common signal being different from any video signals input into and output from the plurality of video interface cards and said common signal being commonly used in the medical support control system, and for detecting a change in an information amount of the input video signal; and a switching control card for determining, when it is determined that the video signal was switched on the basis of the detection result, an output path for the common signal obtained by the conversion on the video signal.07-30-2009
20090189908Display Balance / Metering - Method, apparatuses, and systems are presented for processing a sequence of images for display using a display device involving operating a plurality of graphics devices, including at least one first graphics device that processes certain ones of the sequence of images, including a first image, and at least one second graphics device that processes certain other ones of the sequence of images, including a second image, delaying processing of the second image by the at least one second graphics device, by a specified duration, relative to processing of the first image by the at least one first graphics device, to stagger pixel data output for the first image and pixel data output for the second image, and selectively providing output from the at least one first graphics device and the at least one second graphics device to the display device.07-30-2009
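The staggering idea in this abstract (delaying one device's processing by a fixed duration so the two pixel outputs do not coincide) might look roughly like the following sketch; the delay value and frame timings are hypothetical.

```python
# Illustrative only: the second device begins its frames a fixed delay after the
# first, so the two streams of pixel data reach the display staggered in time.
import threading
import time

STAGGER_S = 0.008      # assumed half-frame offset at ~60 Hz
FRAME_TIME_S = 0.016   # stand-in for per-frame rendering work

def render_frames(device_id: int, frames, start_delay_s: float) -> None:
    time.sleep(start_delay_s)              # delay relative to the other device
    for f in frames:
        time.sleep(FRAME_TIME_S)
        print(f"device {device_id} outputs pixel data for frame {f}")

# device 0 handles even frames, device 1 handles odd frames with a stagger
t0 = threading.Thread(target=render_frames, args=(0, [0, 2, 4], 0.0))
t1 = threading.Thread(target=render_frames, args=(1, [1, 3, 5], STAGGER_S))
t0.start(); t1.start()
t0.join(); t1.join()
```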
20090201302Graphics Rendering On A Network On Chip - Graphics rendering on a network on chip (‘NOC’) including receiving, in the geometry processor, a representation of an object to be rendered; converting, by the geometry processor, the representation of the object to two dimensional primitives; sending, by the geometry processor, the primitives to the plurality of scan converters; converting, by the scan converters, the primitives to fragments, each fragment comprising one or more portions of a pixel; for each fragment: selecting, by the scan converter for the fragment in dependence upon sorting rules, a pixel processor to process the fragment; sending, by the scan converter to the pixel processor, the fragment; and processing, by the pixel processor, the fragment to produce pixels for an image.08-13-2009
20090207178Video Processing with Multiple Graphical Processing Units - One embodiment of a video processor includes a first media processing device coupled to a first memory and a second media processing device coupled to a second memory. The second media processing device is coupled to the first media processing device via a scalable bus. A software driver configures the media processing devices to provide video processing functionality. The scalable bus carries video data processed by the second media processing device to the first media processing device where the data is combined with video data processed by the first media processing device to produce a processed video frame. The first media processing device transmits the combined video data to a display device. Each media processing device is configured to process separate portions of the video data, thereby enabling the video processor to process video data more quickly than a single-GPU video processor.08-20-2009
20090213126Hardware Architecture for Video Conferencing - Video processing architectures, systems, and methods for a multipoint control unit are provided. In one example, a video processing system includes a motherboard and at least one daughterboard, each daughterboard having a plurality of processors interconnected via a daughterboard switch, where the daughterboard switch is configured to switch data between the plurality of processors and between the motherboard and daughterboard. The video processing system may further include a plurality of daughterboards each having an identical hardware and/or mechanical configuration. The plurality of daughterboards may be configured to be mechanically and electrically couplable together in any order, and may be stackable to form a series chain of daughterboards extending from the motherboard, each respective daughterboard switch being further configured to switch data to a daughterboard switch on another daughterboard to permit data flow along said series chain.08-27-2009
20090251472COLLABORATIVE ENVIRONMENTS IN A GRAPHICAL INFORMATION SYSTEM - Collaborative environments in a geographic information system (GIS) are disclosed. Collaboration between multiple processors can be provided within the GIS. A first processor can stream a scenario describing geo-spatial analysis of the image conducted by the first processor. The scenario can include a set of parameters executed by the first processor for review by a user of a second processor. The user of the second processor can transmit a response back to the first processor. The response can include an addition to the scenario, an edit to the scenario, a comment, or acceptance of the scenario. The server can stream the scenario, and/or the images as well as the response between the first and second processors. The image can include three dimensional data and streaming of data can occur across networks such as the Internet.10-08-2009
20090295810INFORMATION PROCESSING APPARATUS - According to one embodiment, an information processing apparatus includes a display module, a first display controller configured to generate a first video signal, a second display controller configured to generate a second video signal, a selection module configured to select one of the first and second video signals, and output the selected video signal to the display module.12-03-2009
20090295811RENDERING MODULE FOR BIDIMENSIONAL GRAPHICS - The disclosure relates to a graphics module for rendering a bidimensional scene on a display screen comprising a graphics pipeline of the sort-middle type, said graphics pipeline comprising: a first processing module configured to clip a span-type input primitive received from a rasterizer module into sub-span type primitives to be associated to respective macro-blocks corresponding to portions of the screen, and to store said sub-span type primitives in a scene buffer; a second processing module configured to reconstruct the span-type input primitive starting from said sub-span type primitives, the second processing module being further intended to implement a culling operation of sub-span type primitives of the occluded type.12-03-2009
20090309884APPARATUS AND METHOD FOR SELECTABLE HARDWARE ACCELERATORS IN A DATA DRIVEN ARCHITECTURE - A method and apparatus employing selectable hardware accelerators in a data driven architecture are described. In one embodiment, the apparatus includes a plurality of processing elements (PEs). A plurality of hardware accelerators are coupled to a selection unit. A register is coupled to the selection unit and the plurality of processing elements. In one embodiment, the register includes a plurality of general purpose registers (GPRs), which are accessible by the plurality of processing elements, as well as the plurality of hardware accelerators. In one embodiment, at least one of the GPRs includes a bit to enable a processing element to access a selected hardware accelerator via the selection unit.12-17-2009
20090322765Method and Apparatus for Configuring Multiple Displays Associated with a Computing System - A method and apparatus for configuring multiple displays associated with a computing system begins when display preferences regarding at least one of the multiple displays are received. The display preferences indicate desired selections of which images are to be displayed on which displays and may be based on user selections or application selections. Having received the display preferences, a coupling controller within a video graphics processing circuit determines whether the display preferences can be fulfilled in observance of configuration properties. The configuration properties include limitations of the displays (e.g., refresh rate, resolution) and the computing system (e.g., display controller capabilities) and/or rules of the computing system (e.g., at least one screen must be actively coupled at all times). If the display preferences can be fulfilled, the coupling controller causes display controllers to be operably coupled to displays. If, however, the display preferences cannot be fulfilled, the coupling controller determines whether the current configuration can be reconfigured to allow the display preferences to be fulfilled with minimal effect on the perceived current configuration. If so, the coupling controller causes the video graphics processing circuitry to be reconfigured.12-31-2009
20100007667INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a first graphics chip, a second graphics chip, a detection unit, and a display unit. The first graphics chip has a first drawing processing capacity. The second graphics chip has a second drawing processing capacity different from the first drawing processing capacity. The detection unit detects a request to change over from an execution of the first graphics chip to an execution of the second graphics chip. The display unit displays a first window prompting to close an application in execution, in a case where the detection unit detects the request to change over from the execution of the first graphics chip to the execution of the second graphics chip.01-14-2010
20100013839Integrated GPU, NIC and Compression Hardware for Hosted Graphics - A computer graphics processing system includes an integrated graphics and network hardware device having a PCI Express interface logic unit, a graphics processor unit, a graphics memory, a compression unit and a network interface unit, all connected together on a PCI Express adapter card using one or more dedicated communication interfaces so that data traffic for graphics processing and network communication need not be routed over a peripheral interface circuit which has a communications bandwidth that must be shared with other system components.01-21-2010
20100013840Compositing in Multiple Video Processing Unit (VPU) Systems - Systems and methods are provided for processing data. The systems and methods include multiple processors that each couple to receive commands and data, where the commands and/or data correspond to frames of video that include multiple pixels. Additionally, an interlink module is coupled to receive processed data corresponding to the frames from each of the multiple processors. The interlink module selects pixels of the frames from the processed data of one of the processors based on a predetermined pixel characteristic and outputs the frames that include the selected pixels.01-21-2010
20100020086SHARING DISPLAY PROCESSING SYSTEM, DISPLAY PROCESSING SYSTEM, AND DISPLAY METHOD - In a sharing display processing system having a plurality of display processing systems each including one or a plurality of display apparatuses, each display processing system arranges display regions corresponding to the respective display apparatuses on a first memory region shared with another display processing system, arranges contents on a second memory region managed by the self system, extracts a part of the second memory region on which the contents are arranged as an extracted region, and arranges the extracted region on the first memory region. Each display apparatus displays the extracted region arranged within the range of the display region corresponding to itself on the first memory region.01-28-2010
20100026689VIDEO PROCESSING SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR ENCRYPTING COMMUNICATIONS BETWEEN A PLURALITY OF GRAPHICS PROCESSORS - A video processing system, method, and computer program product are provided for encrypting communications between a plurality of graphics processors. A first graphics processor is provided. Additionally, a second graphics processor in communication with the first graphics processor is provided for collaboratively processing video data. Furthermore, such communication is encrypted.02-04-2010
20100026690SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR SYNCHRONIZING OPERATION OF A FIRST GRAPHICS PROCESSOR AND A SECOND GRAPHICS PROCESSOR IN ORDER TO SECURE COMMUNICATION THEREBETWEEN - A system, method, and computer program product are provided for synchronizing operation of a first graphics processor and a second graphics processor in order to secure communication therebetween. A first graphics processor is provided for processing video data. In addition, a second graphics processor is provided for processing the video data. Furthermore, a data structure is provided for use in synchronizing operation of the first graphics processor and the second graphics processor in order to secure communication therebetween.02-04-2010
20100026691METHOD AND SYSTEM FOR PROCESSING GRAPHICS DATA THROUGH A SERIES OF GRAPHICS PROCESSORS - One embodiment of the present invention sets forth a computer device that comprises a central processing unit, a system memory, a system interface coupled to the central processing unit, wherein the system interface includes at least one connector slot, and a high-performance graphics processing system coupled to the connector slot of the system interface. The high-performance graphics processing system further comprises a plurality of graphics processing units that includes a first graphics processing unit coupled to a set of first data lanes of the connector slot from which the multiprocessor graphics system receives data to process, and a second graphics processing unit coupled to a set of second data lanes of the connector slot through which the multiprocessor graphics system outputs processed data.02-04-2010
20100045682Apparatus and method for communicating between a central processing unit and a graphics processing unit - The present invention provides an improved technique for communicating between a central processing unit and a graphics processing unit of a data processing apparatus. Shared memory is provided which is accessible by the central processing unit and the graphics processing unit, and via which data structures are shareable between the central processing unit and the graphics processing unit. A bus is also provided via which the central processing unit, graphics processing unit and shared memory communicate. In accordance with a first mechanism of controlling the graphics processing unit, the central processing unit routes control signals via the bus. However, in addition, an interface is provided between the central processing unit and the graphics processing unit, and in accordance with an additional mechanism for controlling the graphics processing unit, the central processing unit provides control signals over the interface. This enables the GPU to continue to be used to handle large batches of graphics processing operations loosely coupled with the operations performed by the CPU, whilst through use of the additional mechanism it is also possible to employ the GPU to perform processing operations on behalf of the CPU in situations where those operations are tightly coupled with the operations performed by the CPU.02-25-2010
20100053176Video Processing Across Multiple Graphics Processing Units - A processing unit, method, and graphics processing system are provided for processing a plurality of frames of graphics data. For instance, the processing unit can include a first plurality of graphics processing units (GPUs), a second plurality of GPUs, and a plurality of compositors. The first plurality of GPUs can be configured to process a first frame of graphics data. Likewise, the second plurality of GPUs can be configured to process a second frame of graphics data. Further, each compositor in the plurality of compositors can be coupled to a respective GPU from the first and second pluralities of GPUs, where the plurality of compositors is configured to sequentially pass the first and second frames of graphics data to a display module.03-04-2010
20100053177GRAPHICS PROCESSING SYSTEM INCLUDING AT LEAST THREE BUS DEVICES - Multichip graphics processing subsystems include at least three distinct graphics devices (e.g., expansion cards) coupled to a high-speed bus (e.g., a PCI Express bus) and operable in a distributed rendering mode. One of the graphics devices provides pixel data to a display device, and at least one of the other graphics devices transfers the pixel data it generates to another of the devices via the bus to be displayed. Where the high-speed bus provides data transfer lanes, allocation of lanes among the graphics devices can be optimized.03-04-2010
20100066747MULTI-CHIP RENDERING WITH STATE CONTROL - Circuits, methods, and apparatus that provide multiple graphics processor systems where specific graphics processors can be instructed to not perform certain rendering operations while continuing to receive state updates, where the state updates are included in the rendering commands for these rendering operations. One embodiment provides commands instructing a graphics processor to start or stop rendering geometries. These commands can be directed to one or more specific processors by use of a set-subsystem device mask.03-18-2010
20100085365Dynamic Load Balancing in Multiple Video Processing Unit (VPU) Systems - Systems and methods are provided for processing data. The systems and methods include multiple processors that each couple to receive commands and data, where the commands and/or data correspond to frames of video that include multiple pixels. An interlink module is coupled to receive processed data corresponding to the frames from each of the processors. The interlink module divides a first frame into multiple frame portions by dividing pixels of the first frame using at least one balance point. The interlink module dynamically determines a position for the balance point that minimizes differences between the workload of the processors during processing of commands and/or data of one or more subsequent frames.04-08-2010
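The balance-point adjustment described here can be approximated by a simple feedback rule: move the split line toward the faster processor's half after each frame. The sketch below is a rough interpretation with invented step sizes and timings, not the interlink module's actual algorithm.

```python
# Rough interpretation of the balance-point feedback: after each frame, nudge
# the split row toward the faster processor so the two workloads converge.
def adjust_balance_point(split_row: int, top_ms: float, bottom_ms: float,
                         frame_height: int, step: int = 8) -> int:
    if top_ms > bottom_ms:
        split_row -= step        # the top portion took longer: shrink it
    elif bottom_ms > top_ms:
        split_row += step        # the bottom portion took longer: shrink it
    return max(step, min(frame_height - step, split_row))

split = 540                      # start at the midpoint of a 1080-row frame
for top_ms, bottom_ms in [(9.1, 6.2), (8.4, 6.8), (7.6, 7.4)]:
    split = adjust_balance_point(split, top_ms, bottom_ms, frame_height=1080)
    print(f"balance point moved to row {split}")
```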
20100085366METHOD AND APPARATUS FOR RENDERING VIDEO - Multiple Video Graphic Adapters (VGAs) are used to render video data to a common port. In one embodiment, each VGA will render an entire frame of video and provide it to the output port through a switch. The next adjacent frame will be calculated by a separate VGA and provided to an output port through the switch. A voltage adjustment is made to a digital-to-analog converter (DAC) of at least one of the VGAs in order to correlate the video-out voltages being provided by the VGAs. This correlation assures that the color being viewed on the screen is uniform regardless of which VGA is providing the signal. A dummy switch receives the video-output from each of the VGAs. When a VGA is not providing information to the output port, the dummy switch can be selected to provide the video-output of the selected VGA a resistance path which matches the resistance at the video port. This allows the video graphics controller to maintain a constant thermal state.04-08-2010
20100091025SEAMLESS DISPLAY MIGRATION - Exemplary embodiments of methods, apparatuses, and systems for seamlessly migrating a user visible display stream sent to a display device from one rendered display stream to another rendered display stream are described. For one embodiment, mirror video display streams are received from both a first graphics processing unit (GPU) and a second GPU, and the video display stream sent to a display device is switched from the video display stream from the first GPU to the video display stream from the second GPU, wherein the switching occurs during a blanking interval for the first GPU that overlaps with a blanking interval for the second GPU.04-15-2010
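A rough sketch of the switch condition described here, assuming hypothetical blanking-window timestamps: the display source is changed only while both GPUs' vertical blanking intervals overlap.

```python
# Assumed timestamps and a toy mux callback; the point is only the condition:
# switch the display source while both GPUs' blanking intervals overlap.
from dataclasses import dataclass

@dataclass
class BlankingWindow:
    start_us: float
    end_us: float

def blanking_overlaps(a: BlankingWindow, b: BlankingWindow) -> bool:
    return a.start_us < b.end_us and b.start_us < a.end_us

def try_switch(now_us: float, a: BlankingWindow, b: BlankingWindow, switch_mux) -> bool:
    if blanking_overlaps(a, b) and a.start_us <= now_us <= a.end_us:
        switch_mux()             # both GPUs are blanking: no partially drawn frame is shown
        return True
    return False

try_switch(16_620.0,
           BlankingWindow(16_600.0, 16_700.0),
           BlankingWindow(16_580.0, 16_660.0),
           lambda: print("display source switched during overlapping blanking"))
```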
20100103179ELECTRONIC DEVICE - Electronic devices with more than one video output terminal and capable of providing distinct videos at different video output terminals. The electronic device comprises first and second display processors driving first and second video output terminals, respectively. The first display processor comprises a blender and a multiplexer. The blender blends a video with image signals, provides a fully-blended video for the first video output terminal, and outputs the video, the partly-blended videos and the fully-blended video to the multiplexer. The second display processor is coupled between the output terminal of the multiplexer and the second video output terminal.04-29-2010
20100141664Efficient GPU Context Save And Restore For Hosted Graphics - A computer graphics processing system provides efficient migrating of a GPU context as a result of a context switching operation. More specifically, the efficient migrating provides a graphics processing unit with context switch module which accelerates loading and otherwise accessing context data representing a snapshot of the state of the GPU. The snapshot includes its mapping of GPU content of external memory buffers.06-10-2010
20100164962TIMING CONTROLLER CAPABLE OF SWITCHING BETWEEN GRAPHICS PROCESSING UNITS - A display system is disclosed that is capable of switching between graphics processing units (GPUs). Some embodiments may include a display system, including a display, a timing controller (T-CON) coupled to the display, the T-CON including a plurality of receivers, and a plurality of GPUs, where each GPU is coupled to at least one of the plurality of receivers, and where the T-CON selectively couples only one of the plurality of GPUs to the display at a time.07-01-2010
20100164963SWITCH FOR GRAPHICS PROCESSING UNITS - Methods and apparatuses are disclosed for improving switching between graphics processing units (GPUs). Some embodiments may include a display system, including a plurality of GPUs, a multiplexer coupled to the plurality of GPUs, a timing controller coupled to the multiplexer, where the timing controller may provide an indication signal to the multiplexer indicative of a period when a first GPU is experiencing a first blanking interval.07-01-2010
20100201694ELECTRONIC IMAGE DEVICE AND DRIVING METHOD THEREOF - An electronic image device and driving method thereof allows a user to control a 3D image and eliminates a process for dividing an input image that is 3D image data into a left-eye image and a right-eye image. 3D image data signals may be generated directly from a 3D image signal or from a time divided 2D image signal.08-12-2010
20100220101MULTIPLE GRAPHICS PROCESSING UNIT SYSTEM AND METHOD - Systems and methods for utilizing multiple graphics processing units for controlling presentations on a display are presented. In one embodiment, a dual graphics processing system includes a first graphics processing unit for processing graphics information; a second graphics processing unit for processing graphics information; and a component for controlling switching between said first graphics processing unit and said second graphics processing unit. In one embodiment, the component for controlling complies with appropriate panel power sequencing operations when coordinating the switching between the first graphics processing unit and the second graphics processing unit.09-02-2010
20100220102MULTIPLE GRAPHICS PROCESSING UNIT SYSTEM AND METHOD - Systems and methods for utilizing multiple graphics processing units for controlling presentations on a display are presented. In one embodiment, a dual graphics processing system includes a first graphics processing unit for processing graphics information; a second graphics processing unit for processing graphics information; and a component for controlling switching between said first graphics processing unit and said second graphics processing unit. In one embodiment, the component for controlling complies with appropriate panel power sequencing operations when coordinating the switching between the first graphics processing unit and the second graphics processing unit.09-02-2010
20100245366Electronic device having switchable graphics processors - An electronic device comprises at least two graphics processors, referred to herein as an integrated graphics processor and a discrete graphics processor. In some circumstances, the device may be switched between the integrated graphics processor and the discrete graphics processor. In some embodiments, techniques are implemented to lock temporarily the screen display on the output of a controller while the device executes a switch between graphics processors, thereby eliminating, or at least reducing, the presence of a blank output display on the electronic device. Other embodiments may be described.09-30-2010
20100253690DYNAMIC CONTEXT SWITCHING BETWEEN ARCHITECTURALLY DISTINCT GRAPHICS PROCESSORS - Graphics processing in a computer graphics apparatus having architecturally dissimilar first and second graphics processing units (GPU) is disclosed. Graphics input is produced in a format having an architecture-neutral display list. One or more instructions in the architecture neutral display list are translated into GPU instructions in an architecture specific format for an active GPU of the first and second GPU.10-07-2010
20100277484Managing Three Dimensional Scenes Using Shared and Unified Graphics Processing Unit Memory - Three dimensional scenes may be managed between a central processing unit and a graphics processing unit using shared and unified graphics processing unit memory. A shared bus memory may be synchronized between the central processing unit and the graphics processing unit. The shared bus memory may be used for more often updated components and other memory may be used for less often updated components. In some embodiments, if the graphics processor and the central processor use a common processor instruction set architecture, data can be sent from the central processor to the graphics processor without serializing the data.11-04-2010
20100277485SYSTEM AND METHOD OF DISPLAYING MULTIPLE VIDEO FEEDS - Respective video feeds are provided to at least two viewers using a common display. The display is controlled to simultaneously display an image from a first video feed and an image from a second video feed. The image from the first video feed is displayed within a first wavelength band and the image from the second video feed is displayed within a second wavelength band, and the first and second wavelength bands are distinct. A first filter is selective for transmitting the first wavelength band and not transmitting the second wavelength band. A second filter is selective for transmitting the second wavelength band and not transmitting the first wavelength band. Only the first video feed image is provided to a first viewer using the first filter, and only the second video feed image is provided to a second viewer using the second filter.11-04-2010
20100283789DISPLAY APPARATUS HAVING A PLURALITY OF CONTROLLERS AND VIDEO DATA PROCESSING METHOD THEREOF - A display apparatus includes a first controller, a second controller and a display panel. The first controller includes a first memory and is used for receiving a first portion of pixel data of a frame and storing the first portion of the pixel data into the first memory. The second controller, which is external to the first controller and includes a second memory, is used for receiving a second portion of the pixel data of the frame and storing the second portion of the pixel data into the second memory. The display panel is used for receiving at least the first and the second portion of the pixel data outputted from the first and the second controllers, respectively.11-11-2010
20100289803MANAGING GRAPHICS LOAD BALANCING STRATEGIES - A method and system for managing graphics load balancing strategies are disclosed. The method comprises using a plurality of rendering servers to render a multitude of graphics frames for a display device, wherein each of the rendering servers has an associated workload; identifying a plurality of load balancing strategies for balancing the workloads on the rendering servers; selecting one of the load balancing strategies; and using the selected one of the load balancing strategies to balance the workloads on the rendering servers. One or more defined metrics are monitored; and in response to a defined changed in said one or more defined metrics, another one of the load balancing strategies is selected and used to balance the workloads on the rendering servers. In one embodiment, the load balancing policy can be changed in real-time during the course of an application session.11-18-2010
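One way to picture the strategy management described here is a loop that monitors a metric and swaps the active load-balancing strategy when the metric crosses a threshold. The strategies, metric, and threshold below are placeholders chosen for illustration.

```python
# Placeholder strategies, metric, and threshold; only the monitor-and-switch
# loop structure is meant to mirror the abstract.
def split_frame_strategy(frame: int, servers: list) -> dict:
    return {s: f"frame {frame}, slice {i}" for i, s in enumerate(servers)}

def alternate_frame_strategy(frame: int, servers: list) -> dict:
    return {servers[frame % len(servers)]: f"frame {frame}"}

STRATEGIES = {"split-frame": split_frame_strategy,
              "alternate-frame": alternate_frame_strategy}

def choose_strategy(avg_frame_ms: float) -> str:
    return "split-frame" if avg_frame_ms > 20.0 else "alternate-frame"

servers = ["render0", "render1"]
current = "alternate-frame"
for frame, avg_ms in enumerate([12.0, 14.0, 27.0, 25.0]):   # monitored metric
    wanted = choose_strategy(avg_ms)
    if wanted != current:
        current = wanted
        print(f"frame {frame}: switching load balancing to {current}")
    STRATEGIES[current](frame, servers)
```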
20100302260Multimedia and Multichannel Information System - The invention relates to a multimedia and multichannel information system comprising a central unit and a plurality of remote units equipped with displays, connected to the central unit by means of a transmission network, the system characterized in that it is provided with bi-directional data transmission means on said network, and in that said remote units are provided with at least an interface and with audio-video, data and graphics accelerators disposed on a multimedia microprocessor housed in each of said remote units.12-02-2010
20100315427MULTIPLE GRAPHICS PROCESSING UNIT DISPLAY SYNCHRONIZATION SYSTEM AND METHOD - Systems and methods for utilizing multiple graphics processing units for controlling presentations on a display are presented. In one embodiment, a dual graphics processing system includes a first graphics processing unit for processing graphics information; a second graphics processing unit for processing graphics information; a component for synchronizing transmission of display component information from the first graphics processing unit and the second graphics processing unit and a component for controlling switching between said first graphics processing unit and said second graphics processing unit. In one embodiment, the component for synchronizing transmission of display component information adjusts (e.g., delays, speeds up, etc.) the occurrence or duration of a corresponding graphics presentation characteristic (e.g., end of frame, end of line, vertical blanking period, horizontal blanking period, etc.) in signals from multiple graphics processing units.12-16-2010
20100321395DISPLAY SIMULATION SYSTEM AND METHOD - A display simulation system is provided having a flexible design for emulating and/or supporting any number of display types and/or display standards. The display simulation system may include one or more reference drivers that include a virtual graphics processing unit (GPU) and one or more virtual frame buffer drivers. In one embodiment, the display simulation system may implement a virtual display in response to a user selection input. For instance, the user selection input may initiate a simulated hot-plug event on the display simulation system. Based upon the user selection, an appropriate display profile corresponding to the selected display type or standard may be loaded by the display driver. In this manner, the display simulation system may provide for user interaction with the virtual display, such as for testing, verification, benchmarking, or development purposes.12-23-2010
20100328323VIRTUAL GRAPHICS DEVICE DRIVER - Systems and methods are disclosed to enable switching of graphics processing unit (GPU) resources based on different factors. Embodiments include a virtual graphics driver as an interface between GPU drivers and the applications or graphics framework executing on an electronic device. The virtual graphics driver may switch GPU resources from a first GPU to a second GPU by routing function calls to the first GPU or the second GPU. The switching of GPU resources may be based on power management, system events such as hot-plug events, load management, user requests, any other factor, or any combination thereof. In some embodiments, a virtual frame buffer driver is provided that interfaces with the frame buffer of the GPU and provides a virtual view of the frame buffer to manage additional system application programming interfaces (APIs) during the switch.12-30-2010
20110007083DISTRIBUTED PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND DISTRIBUTED PROCESSING METHOD - According to an aspect of the embodiment, a user apparatus transmits a parameter on generation of drawing data to each of drawing data generation apparatuses through a network, to assign generation processing of the drawing data to each of drawing data generation apparatuses. The user apparatus receives the drawing data generated based on the parameter by each of the plurality of drawing data generation apparatuses through the network, and displays the received drawing data. The user apparatus changes the parameter corresponding to the displayed drawing data, and displays a new drawing data corresponding to the changed parameter.01-13-2011
20110025696METHOD AND SYSTEM FOR DYNAMICALLY ADDING AND REMOVING DISPLAY MODES COORDINATED ACROSS MULTIPLE GRAPHICS PROCESSING UNITS - The present invention provides a method and system for coordinating graphics processing units in a single computing system. A method is disclosed which allows for the construction of a list of shared display modes that may be employed by both of the graphics processing units to render an output in a display device. By creating the list of shared commonly supportable display modes, the output displayed in the display device may advantageously provide a consistent graphical experience persisting through the use of alternate graphics processing units in the system. One method builds a list of shared display modes by compiling a list from a GPU specific base mode list and dynamic display modes acquired from an attached display device. Another method provides the ability to generate graphical output configurations according to a user-selected display mode that persists when alternate graphics processing units in the system are used to generate graphical output.02-03-2011
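One plausible reading of the shared-mode-list construction described here is an intersection of the two GPUs' supportable modes with the modes reported by the attached display; the sketch below assumes (width, height, refresh) tuples and is not the patented method itself.

```python
# A guess at the shape of the shared-mode computation: only modes drivable by
# both GPUs and reported by the attached display survive a GPU switch.
def shared_display_modes(gpu_a_modes, gpu_b_modes, display_modes):
    return sorted(set(gpu_a_modes) & set(gpu_b_modes) & set(display_modes))

gpu_a = [(1920, 1080, 60), (1280, 720, 60), (1680, 1050, 60)]
gpu_b = [(1920, 1080, 60), (1280, 720, 60), (3840, 2160, 30)]
display = [(1920, 1080, 60), (1280, 720, 60)]    # modes from the attached display
print(shared_display_modes(gpu_a, gpu_b, display))
```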
20110037768System for Emulating Graphics Operations - Disclosed is a system for producing images including emulation techniques using multiple processors. The system provides for emulation of graphics processing resources such that a central processing unit may provide graphics support. Disclosed embodiments include emulation of selected graphics calls as well as emulation of a programmable graphics processor for compatibility with systems having no compatible GPU. Embodiments also include optimization of graphics code for a particular kind of processor.02-17-2011
20110050710Internal, Processing-Unit Memory For General-Purpose Use - Disclosed herein is a graphics-processing unit (GPU) having an internal memory for general-purpose use and applications thereof. Such a GPU includes a first internal memory, an execution unit coupled to the first internal memory, and an interface configured to couple the first internal memory to a second internal memory of an other processing unit. The first internal memory may comprise a stacked dynamic random access memory (DRAM) or an embedded DRAM. The interface may be further configured to couple the first internal memory to a display device. The GPU may also include another interface configured to couple the first internal memory to a central processing unit. In addition, the GPU may be embodied in software and/or included in a computing system.03-03-2011
20110050711VIRTUALIZATION OF GRAPHICS RESOURCES - Graphics resources are virtualized through an interface between graphics hardware and graphics clients. The interface allocates the graphics resources across multiple graphics clients, processes commands for access to the graphics resources from the graphics clients, and resolves conflicts for the graphics resources among the clients.03-03-2011
20110057935Tiling Compaction in Multi-Processor Systems - A method and system for processing a graphics frame in a multi-processor computing environment are described. Embodiments of the present invention enable the reduction of the memory footprint required for processing a graphics frame in a multi-processor system. In one embodiment a method of processing a graphics frame using a plurality of processors is presented. The method includes determining a respective assignment of tiles of the graphics frame to each processor of the plurality of processors; allocating a memory area in a local memory of each processor, where the size of the allocated memory area substantially corresponds to the aggregate size of tiles assigned to the respective processor; and storing the tiles of the respective assignment of tiles in the memory area of each respective processor.03-10-2011
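The compaction idea in this abstract (size each processor's local allocation to the aggregate of only its assigned tiles) can be sketched as follows; the tile size and round-robin assignment are assumptions made for the example.

```python
# Assumed 64x64 RGBA tiles and a round-robin assignment; the point is that each
# local allocation matches the aggregate size of the tiles assigned to it.
TILE_BYTES = 64 * 64 * 4

def assign_tiles(num_tiles: int, num_procs: int) -> dict:
    return {p: [t for t in range(num_tiles) if t % num_procs == p]
            for p in range(num_procs)}

def allocate_local_buffers(assignment: dict) -> dict:
    return {p: bytearray(len(tiles) * TILE_BYTES) for p, tiles in assignment.items()}

assignment = assign_tiles(num_tiles=30, num_procs=2)
buffers = allocate_local_buffers(assignment)
for p, buf in buffers.items():
    print(f"processor {p}: {len(assignment[p])} tiles, {len(buf)} bytes allocated")
```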
20110063304CO-PROCESSING SYNCHRONIZING TECHNIQUES ON HETEROGENEOUS GRAPHICS PROCESSING UNITS - The graphics co-processing technique includes receiving a display operation for execution by a graphics processing unit on an unattached adapter. The display operation is split into a copy from a frame buffer of the graphics processing unit on the unattached adapter to a buffer in system memory, a copy from the buffer in system memory to a frame buffer of a graphics processing unit on a primary adapter, and a present from the frame buffer of the graphics processing unit on the primary adapter to a display. Execution of the copy from the frame buffer of the graphics processing unit on the unattached adapter to the buffer in system memory and the copy from the buffer in system memory to the frame buffer of the graphics processing unit on the primary adapter are synchronized.03-17-2011
20110074791GPGPU SYSTEMS AND SERVICES - Graphics processing units (GPUs) deployed in general purpose GPU (GPGPU) units are combined into a GPGPU cluster. Access to the GPGPU cluster is then offered as a service to users who can use their own computers to communicate with the GPGPU cluster. The users develop applications to be run on the cluster and a profiling module tracks the applications' resource utilization and can report it to the user and to a subscription server. The user can examine the report to thereby optimize the application or the cluster's configuration. The subscription server can interpret the report to thereby invoice the user or otherwise govern the users' access to the cluster.03-31-2011
20110080414LOW POWER MULTI-CORE DECODER SYSTEM AND METHOD - A portable data terminal including a multi-core processor having at least a first core and a second core, at least one illumination assembly and at least one imaging assembly and data storage means configured to store a plurality of program instructions, the program instructions including at least one one-dimensional decoder and at least one two-dimensional decoder.04-07-2011
20110134132METHOD AND SYSTEM FOR TRANSPARENTLY DIRECTING GRAPHICS PROCESSING TO A GRAPHICAL PROCESSING UNIT (GPU) OF A MULTI-GPU SYSTEM - A method for transparently directing data in a multi-GPU system. A driver application receives a first plurality of graphics commands from a first graphics application and selects a first GPU from the multi-GPU system to exclusively process the first plurality of graphics commands. The first plurality of graphics commands is transmitted to the first GPU for processing and producing a first plurality of renderable data. The first plurality of renderable data is stored in a first frame buffer associated with the first GPU. A second plurality of graphics commands is received from a second graphics application and a second GPU is selected to exclusively process the second plurality of graphics commands. The second GPU processing the second plurality of graphics commands produces a second plurality of renderable data. The second plurality of renderable data is stored in a second frame buffer associated with the second GPU.06-09-2011
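The transparent routing described here can be approximated by a driver object that pins each application to one GPU and keeps that application's rendered output in that GPU's frame buffer. Class and method names below are invented for illustration.

```python
# Invented driver object: each application is pinned to one GPU, and that
# application's rendered output stays in the assigned GPU's frame buffer.
from collections import defaultdict
from itertools import cycle

class MultiGpuDriver:
    def __init__(self, gpu_ids):
        self._next_gpu = cycle(gpu_ids)
        self._app_to_gpu = {}
        self.frame_buffers = defaultdict(list)   # gpu id -> renderable data

    def submit(self, app_id: str, commands: list) -> str:
        if app_id not in self._app_to_gpu:
            self._app_to_gpu[app_id] = next(self._next_gpu)   # exclusive assignment
        gpu = self._app_to_gpu[app_id]
        self.frame_buffers[gpu].extend(f"rendered({c})" for c in commands)
        return gpu

driver = MultiGpuDriver(["gpu0", "gpu1"])
print(driver.submit("app_a", ["draw_mesh", "draw_ui"]))   # -> gpu0
print(driver.submit("app_b", ["decode", "present"]))      # -> gpu1
print(driver.submit("app_a", ["draw_mesh"]))              # still gpu0
```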
20110148888METHOD AND APPARATUS FOR CONTROLLING MULTIPLE DISPLAY PANELS FROM A SINGLE GRAPHICS OUTPUT - Techniques for providing image information in a single graphics pipe for operation of a plurality of display panels, where a displaying of an image represented by data in the single graphics pipe is to span the plurality of display panels. In an embodiment, the plurality of display panels is represented to an operating system as a single display. In another embodiment, a scale factor may be determined for the representation of the plurality of display panels to conform to one or more criteria of the operating system.06-23-2011
20110157189SHARED BUFFER TECHNIQUES FOR HETEROGENEOUS HYBRID GRAPHICS - The graphics processing technique includes detecting, by a hybrid driver, a transition from rendering graphics on a first graphics processing unit to rendering on a second graphics processing unit. The hybrid driver, in response to detecting the transition, configures the first graphics processing unit to create a frame buffer. Thereafter, an image rendered on the second graphics processing unit may be copied to the frame buffer of the first graphics processing unit. The rendered image in the frame buffer may then be scanned out on the display.06-30-2011
20110157190FAST INTEGER DCT METHOD ON MULTI-CORE PROCESSOR - In a fast integer DCT method on multi-core processor, the instructions executed by a DSP are allocated with regular and symmetrical data flows for improving the hardware utilization of each task engine of a digital signal processor. Thus, common terms exhibit symmetrical arithmetical instructions. The symmetrical arithmetical instructions are properly arranged for task engines in parallel processing. The loading of the digital signal processor can be effectively reduced in performing the integer discrete cosine transformation to accordingly generate the result quickly.06-30-2011
20110164045FACILITATING EFFICIENT SWITCHING BETWEEN GRAPHICS-PROCESSING UNITS - The disclosed embodiments provide a system that facilitates seamlessly switching between graphics-processing units (GPUs) to drive a display. In one embodiment, the system receives a request to switch from using a first GPU to using a second GPU to drive the display. In response to this request, the system uses a kernel thread which operates in the background to configure the second GPU to prepare the second GPU to drive the display. While the kernel thread is configuring the second GPU, the system continues to drive the display with the first GPU and a user thread continues to execute a window manager which performs operations associated with servicing user requests. When configuration of the second GPU is complete, the system switches the signal source for the display from the first GPU to the second GPU.07-07-2011
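A minimal model of the handoff described here: a background thread prepares the second GPU while the first continues to drive the display, and the signal source is switched only after configuration completes. Timings and names are placeholders.

```python
# Placeholder timings and names; a background thread prepares the second GPU
# while the first keeps driving the display, then the source is switched.
import threading
import time

configured = threading.Event()

def configure_second_gpu() -> None:
    time.sleep(0.2)              # stand-in for mode setting, surface setup, etc.
    configured.set()

active_gpu = "GPU-A"
threading.Thread(target=configure_second_gpu, daemon=True).start()

while not configured.is_set():
    print(f"{active_gpu} still driving display; user requests keep being serviced")
    time.sleep(0.05)

active_gpu = "GPU-B"             # configuration finished: switch the signal source
print(f"switched display signal source to {active_gpu}")
```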
20110216078Method, System, and Apparatus for Processing Video and/or Graphics Data Using Multiple Processors Without Losing State Information - Method, system, and apparatus provides for the processing of video and/or graphics data using a combination of first graphics processing circuitry and second graphics processing circuitry without losing state information while transferring the processing between the first and second graphics processing circuitry. The video and/or graphics data to be processed may be, for example, supplied by an application running on a processor such as host processor. In one example, an apparatus includes at least one GPU that includes a plurality of single instruction multiple data (SIMD) execution units. The GPU is operative to execute a native function code module. The apparatus also includes at least a second GPU that includes a plurality of SIMD execution units having a same programming model as the plurality of SIMD execution units on the first GPU. Furthermore, the first and second GPUs are operative to execute the same native function code module. The native code function module causes the first GPU to provide state information for the at least second GPU in response to a notification from a first processor, such as a host processor, that a transition from a current operational mode to a desired operational mode is desired (e.g., one GPU is stopped and the other GPU is started). The second GPU is operative to obtain the state information provided by the first GPU and use the state information via the same native function code module to continue processing where the first GPU left off. The first processor is operatively coupled to the at least first and at least second GPUs.09-08-2011
20110216079Partial Display Updates in a Windowing System Using a Programmable Graphics Processing Unit - Techniques to generate partial display updates in a buffered window system in which arbitrary visual effects are permitted to any one or more windows (e.g., application-specific window buffers) are described. Once a display output region is identified for updating, the buffered window system is interrogated to determine which regions within each window, if any, may affect the identified output region. Such determination considers the consequences any filters associated with a window impose on the region needed to make the output update.09-08-2011
20110227934Architecture for Volume Rendering - Architecture for volume rendering is described. In an embodiment volume rendering is carried out at a data centre having a cluster of rendering servers connected using a high bandwidth connection to a database of medical volumes. For example, each rendering server has multiple graphics processing units each with a dedicated device thread. For example, a surgeon working from home on her netbook or thin client is able to have a medical volume rendered remotely at one of the rendering servers and the resulting 2D image sent to her over a relatively low bandwidth connection. In an example a master rendering server carries out load balancing at the cluster. In an example each rendering server uses a dedicated device thread for each graphics processing unit in its control and has multiple calling threads which are able to send rendering instructions to appropriate ones of the device threads.09-22-2011
20110267359SYSTEMS AND METHODS FOR HOT PLUG GPU POWER CONTROL - Systems and methods include an electronic device having multiple GPUs and a GPU power control process that controls switching between a first GPU and a second GPU, such as a high performance GPU. The electronic device may be coupled to an external display by a passive adapter or an active adapter. The GPU power control process may determine if the second GPU is active and switch to the second GPU upon connection of the external display through either the passive adapter or the active adapter. Upon connection of an active adapter, the GPU power control process may use hot plug functionality to determine connection of the external display to the active adapter and provide appropriate switching in response thereto.11-03-2011
20110273458Shared Graphics Infrastructure - Systems and methods that provide for a common device enumeration point to a class of software objects, which represent hardware and can emit 2D bitmaps, via a presentation interface component. Such presentation interface component can further include a factory component that centralizes enumeration and creation for any components that control or communicate with the frame buffer of the graphics display subsystems. Accordingly, a smooth transition can be supplied between full screen and window models, within desktop composition systems, wherein applications can readily support such transitions.11-10-2011
20110298812Seamless Switching Between Graphics Controllers - A system and method for resolving the blank screen issue when switching between graphics processing units. The system and method provide a graphics adapter LCD timing controller (Tcon) with a frame buffer specifically dedicated to storing previously presented screen data for use when switching graphic processing units. The system further includes a protocol comparator unit within a serial-to-parallel converter and a memory controller coupled to the protocol comparator.12-08-2011
20110304635RENDERING PROCESSOR - A main processor collects the edge information and color information of the pixels of a rendering target image using a rendering command, and sends the collected edge information and color information of the pixels to a sub-processor of the succeeding stage. The sub-processor sends the edge information and color information of a left rectangular region to a sub-processor, and also renders a right rectangular region and, upon receiving a process wait signal from the sub-processor, sends the rendering result to the sub-processor. The sub-processor renders the left rectangular region and sends the rendering result to the outside, and also sends, to the outside, the rendering result of the right rectangular region acquired by sending a process wait signal to the sub-processor.12-15-2011
20110310107INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND PROGRAM - There is provided an information processing apparatus, including a first processing unit capable of processing an image, a second processing unit capable of processing the image in parallel for each unit dividing the image, and a controller section configured to perform a control to select one of the first processing unit, the second processing unit, and both of them as a subject or subjects processing the image, to divide, in a case where both the first processing unit and the second processing unit are selected, the image into a first region and a second region, and to assign processing of an image of the first region and processing of an image of the second region, which are obtained by the division, to the first processing unit and the second processing unit, respectively, to cause the first processing unit and the second processing unit to perform the processing.12-22-2011
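The region assignment described here, when both processing units are selected, might be sketched as a simple split of the image into two regions; the top/bottom division below is an assumption for illustration.

```python
# Hypothetical top/bottom split; regions are (left, top, right, bottom) in pixels.
def assign_regions(width: int, height: int, use_first: bool, use_second: bool) -> dict:
    regions = {}
    if use_first and use_second:
        regions["first"] = (0, 0, width, height // 2)        # first unit: top half
        regions["second"] = (0, height // 2, width, height)  # second unit: bottom half
    elif use_first:
        regions["first"] = (0, 0, width, height)
    elif use_second:
        regions["second"] = (0, 0, width, height)
    return regions

print(assign_regions(1920, 1080, use_first=True, use_second=True))
```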
20110316862Multi-Processor - A multi-processor according to an example of the invention comprises a first control unit which stores first compressed data acquired externally in a first memory, a hardware decoding unit which decodes the first compressed data stored in the first memory and stores the decoded data in a second memory, an encoding processor element which includes at least one of a plurality of processor elements, encodes the decoded data stored in the second memory in accordance with encoding software stored in a third memory, and stores second compressed data obtained by encoding the decoded data in a fourth memory, and a second control unit which outputs the second compressed data stored in the fourth memory to the outside.12-29-2011
20120001925Dynamic Feedback Load Balancing - A method for rendering a scene across N number of processors is provided. The method includes evaluating performance statistics for each of the processors and establishing load rendering boundaries for each of the processors, the boundaries defining a respective portion of the scene. The method also includes dynamically adjusting the boundaries based upon the establishing and the evaluating.01-05-2012
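As an illustration of the kind of feedback loop such an abstract describes (this sketch is not from the patent; the function name and the use of per-processor frame times as the performance statistic are assumptions), the following C++ snippet recomputes each processor's share of the scene so that shares stay proportional to measured throughput:

    #include <numeric>
    #include <vector>

    // Given last-frame render times for N processors, return the fraction of
    // the scene each processor should be assigned for the next frame. Faster
    // processors (smaller frame times) receive proportionally larger shares.
    std::vector<double> rebalance(const std::vector<double>& frameTimesMs) {
        std::vector<double> throughput;
        for (double t : frameTimesMs) throughput.push_back(1.0 / t);
        double total = std::accumulate(throughput.begin(), throughput.end(), 0.0);
        std::vector<double> shares;
        for (double w : throughput) shares.push_back(w / total);
        return shares;  // consecutive shares define the load rendering boundaries
    }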
20120032964APPARATUS AND METHOD FOR SELECTABLE HARDWARE ACCELERATORS - A method and apparatus employing selectable hardware accelerators in a data driven architecture are described. In one embodiment, the apparatus includes a plurality of processing elements (PEs). A plurality of hardware accelerators are coupled to a selection unit. A register is coupled to the selection unit and the plurality of processing elements. In one embodiment, the register includes a plurality of general purpose registers (GPRs), which are accessible by the plurality of processing elements as well as by the plurality of hardware accelerators. In one embodiment, at least one of the GPRs includes a bit that enables a processing element to access a selected hardware accelerator via the selection unit.02-09-2012
20120056891MIGRATING AND SAVE RESTORING A VIRTUAL 3D GRAPHICS DEVICE - A virtual graphics processing unit within a virtual machine may be restored by causing a reset to the virtual graphics processing unit. The state of the virtual graphics processing unit may not be saved during a migration or save and restore operation, but a reset of the virtual graphics processing unit may cause all applications with processes in the virtual graphics processing unit to re-start and thereby recreate the state of the virtual graphics processing unit. A hypervisor may include a separate graphics processor unit process that may present a virtual graphics processing unit to a virtual machine and communicate with a physical graphics processing unit in hardware. When a virtual machine may be restored after a save or migration, the hypervisor may cause the virtual graphics processor unit to reset and its state to be recreated.03-08-2012
20120069029INTER-PROCESSOR COMMUNICATION TECHNIQUES IN A MULTIPLE-PROCESSOR COMPUTING PLATFORM - This disclosure describes communication techniques that may be used within a multiple-processor computing platform. The techniques may, in some examples, provide software interfaces that may be used to support message passing within a multiple-processor computing platform that initiates tasks using command queues. The techniques may, in additional examples, provide software interfaces that may be used for shared memory inter-processor communication within a multiple-processor computing platform. In further examples, the techniques may provide a graphics processing unit (GPU) that includes hardware for supporting message passing and/or shared memory communication between the GPU and a host CPU.03-22-2012
20120081372IMAGE PROCESSOR - An image processing unit includes a computing unit, a data input unit that inputs image data to the computing unit, a data output unit that outputs the image data computed by the computing unit, and a setting unit. The computing unit includes computing cells of multiple types, input domain selectors, and at least one output domain selector. The setting unit sets the input domain selectors and the output domain selectors so that image data, inputted by the data input unit to the computing unit and on which desired computing has been performed by at least one of the computing cells, is outputted from the data output unit.04-05-2012
20120098840APPLYING NON-HOMOGENEOUS PROPERTIES TO MULTIPLE VIDEO PROCESSING UNITS (VPUs) - A multiprocessor system includes a plurality of special purpose processors that perform different portions of a related processing task. A set of commands that causes each of the processors to perform its portion of the related task is distributed, and the set of commands includes a predicated execution command that precedes other commands within the set. It is determined whether commands subsequent to the predicated execution command are intended to be executed by a first processor or a second processor based on information in the predicated execution command, and the set of commands includes all commands to be executed by each processor.04-26-2012
20120139926MEMORY ALLOCATION IN DISTRIBUTED MEMORIES FOR MULTIPROCESSING - In some aspects, finer grained parallelism is achieved by segmenting programmatic workloads into smaller discretized portions, where a first element can be indicative both of a configuration or program to be executed and of a first data set to be used in such execution, while a second element can be indicative of a second data element or group. The discretized portions can cause programs to execute on distributed processors. Approaches to selecting processors, and to allocating local memory associated with those processors, are disclosed. In one example, discretized portions that share a program have an anti-affinity to cause dispersion for initial execution assignment. Flags, such as programmer- and compiler-generated flags, can be used in determining such allocations. Workloads can be grouped according to compatibility of memory usage requirements.06-07-2012
20120139927MEMORY ADDRESS RE-MAPPING OF GRAPHICS DATA - A method and apparatus for creating, updating, and using guest physical address (GPA) to host physical address (HPA) shadow translation tables for translating GPAs of graphics data direct memory access (DMA) requests of a computing environment implementing a virtual machine monitor to support virtual machines. The requests may be sent through a render or display path of the computing environment from one or more virtual machines, transparently with respect to the virtual machine monitor. The creating, updating, and using may be performed by a memory controller detecting entries sent to existing global and page directory tables, forking off shadow table entries from the detected entries, and translating GPAs to HPAs for the shadow table entries.06-07-2012
20120147015Graphics Processing in a Multi-Processor Computing System - A method, computer program product, and computing system are provided for processing a graphics operation. For instance, the method can include receiving the graphics operation from an application. The method can also include allocating a first portion of the graphics operation to a first processing unit and a second portion of the graphics operation to a second processing unit. This allocation between the first and second processing units can be based on at least one of a performance profile and a functionality profile of the first and second processing units.06-14-2012
20120154410APPARATUS AND METHOD FOR PROCESSING A FRAME IN CONSIDERATION OF THE PROCESSING CAPABILITY AND POWER CONSUMPTION OF EACH CORE IN A MULTICORE ENVIRONMENT - An apparatus and method for processing a frame in consideration of processing capability and power consumption for each core in a multi-core system are provided. To perform a user interface drawing in a multi-core environment, an optimum combination of hardware components capable of operating with the minimum of power consumption while satisfying a requirement of a user may be obtained and a parallel user interface drawing may be performed by use of the optimum combination of hardware components.06-21-2012
20120182302GRAPHICS PROCESSING UNIT AND INFORMATION PROCESSING APPARATUS - According to one embodiment, a graphics processing unit comprises a host interface, a plurality of processing cores, an arithmetic control unit, a video signal output interface, and an audio signal output interface. The host interface is configured to receive video data and audio data from a host. The arithmetic control unit is configured to process the video and audio data using at least a first processing core and a second processing core respectively. The video signal output interface is configured to output a video signal corresponding to the processed video data. The audio signal output interface is configured to output an audio signal corresponding to the processed audio data.07-19-2012
20120188258GRAPHICS PROCESSING DISPATCH FROM USER MODE - A method, system, and computer program product are disclosed for providing improved access to accelerated processing device compute resources to user mode applications. The functionality disclosed allows user mode applications to provide commands to an accelerated processing device without the need for kernel mode transitions in order to access a unified ring buffer. Instead, applications are each provided with their own buffers, which the accelerated processing device hardware can access to process commands. With full operating system support, user mode applications are able to utilize the accelerated processing device in much the same way as a CPU.07-26-2012
20120229476PHYSICAL GRAPHICS CARD USE FOR MULTIPLE USER COMPUTING - The current invention allows the connection of a plurality of monitors to a single host computer, allowing use by a plurality of users without the additional cost and complexity of a terminal server, local area network and thin clients or additional computers for each user. The host computer includes a video card having at least two separate video outputs, each connected to a monitor. Alternatively or additionally, the host includes a plurality of video cards. Each user interacts with a unique session that executes the user's application and displays its results on one of the monitors. The invention discloses methods for enabling use of one or more physical graphics cards for one or more user sessions within a single computer. The invention also discloses methods to allow the assignment of a separate video output to each user by using a plurality of video drivers. In some embodiments, a video synchronizer is used by all graphics functions, allowing direct invocation of video card commands and synchronizing commands to said video card.09-13-2012
20120229477VIDEO WALL SYSTEM AND METHOD FOR CONTROLLING THE SAME - A method for controlling a video wall system, in which the video wall system includes a plurality of host processors. The method includes the step of transmitting a plurality of continuous commands, one by one and without a time interval between them, to the host processors, and the step of the host processors synchronously performing corresponding operations according to the commands. A video wall system is also disclosed herein.09-13-2012
20120229478REDUCED CONTEXT DEPENDENCY AT TRANSFORM EDGES FOR PARALLEL CONTEXT PROCESSING - A method and apparatus for parallel processing of at least two bins relating to at least one of a video and an image. The method includes determining the scan type of at least a portion of the video or image, analyzing the neighboring position of a bin, removing dependencies of context selection based on the scan type and the position of the location being encoded in a transform, and performing parallel processing of the at least two bins.09-13-2012
20120229479DATA PROCESSING UNIT WITH MULTI-GRAPHIC CONTROLLER AND METHOD FOR PROCESSING DATA USING THE SAME - A portable terminal includes a first processing core configured to process data; a second processing core, which is faster than the first processing core, configured to process the data; and a storage unit configured to store multimedia data. The first and second processing cores are integrated into a single chipset and are configured to be individually enabled or disabled based on a workload. The portable terminal is configured to operate in one of a standby state and an operating state, to play back the multimedia data stored in the storage unit, and to access the Internet.09-13-2012
20120236010Page Fault Handling Mechanism - Page faults arising in a graphics processing unit may be handled by an operating system running on the central processing unit. In some embodiments, this means that unpinned memory can be used for the graphics processing unit. Using unpinned memory in the graphics processing unit may expand the capabilities of the graphics processing unit in some cases.09-20-2012
20120249559Controlling the Power State of an Idle Processing Device - A method of operating a processing device is provided. The method includes, responsive to an idle state of the processing device, transitioning the processing device to a substantially disabled state. The processing device, for example, may be a graphics processing unit (GPU). Transitioning the processing device to a substantially disabled state upon detection of an idle state may result in power savings. Corresponding systems and computer program products are also provided.10-04-2012
20120262464Switch for Graphics Processing Units - Methods and apparatuses are disclosed for improving switching between graphics processing units (GPUs). Some embodiments may include a display system, including a plurality of GPUs, a multiplexer coupled to the plurality of GPUs, a timing controller coupled to the multiplexer, where the timing controller may provide an indication signal to the multiplexer indicative of a period when a first GPU is experiencing a first blanking interval.10-18-2012
20120293522Browser for Use in Navigating a Body of Information, with Particular Application to Browsing Information Represented by Audiovisual Data - A method for enabling a user to review a body of information that includes first and second segments from respective first and second information sources includes: storing second segment digital data representing the second segments; receiving an indication that the user has selected for display a particular first segment; identifying one or more of the second segments that are related to the particular first segment by comparing first segment digital data to the second segment digital data; and providing display digital data for display of one or more representations or portions of the identified second segments contemporaneously with display of the particular first segment. The display digital data enables the displayed representations or portions of the second segments to be selected by the user when displayed.11-22-2012
20120313952INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - Disclosed herein is an information processing apparatus including: a first drawing processing block configured to generate a video signal by executing predetermined signal processing on entered image data; a second drawing processing block having a higher drawing processing power than the first drawing processing block and being configured to generate a video signal by executing predetermined signal processing on entered image data; a workload measuring block configured to measure at least one of a workload in the first drawing processing block and a workload in the second drawing processing block; a storage block configured to store an application; and a control block configured to select the first drawing processing block or the second drawing processing block to execute the application read from the storage block, on the basis of at least one of the measured workload in the first drawing processing block and the second drawing processing block.12-13-2012
20120320068DYNAMIC CONTEXT SWITCHING BETWEEN ARCHITECTURALLY DISTINCT GRAPHICS PROCESSORS - Graphics processing in a computer graphics apparatus having architecturally dissimilar first and second graphics processing units (GPU) is disclosed. Graphics input is produced in a format having an architecture-neutral display list. One or more instructions in the architecture neutral display list are translated into GPU instructions in an architecture specific format for an active GPU of the first and second GPU.12-20-2012
20130002688METHOD FOR CONTROLLING MULTIPLE DISPLAYS AND SYSTEM THEREOF - A method and system for controlling multiple displays is provided. The disclosed method is used to control a plurality of graphics processing units (GPUs), wherein every GPU controls one or more displays. The method includes the following steps: providing a graphical interface identical to a graphical program library of an operating system, replacing that graphical program library in order to receive drawing commands from an application program; determining a display set of the GPUs according to a display region of the application program, wherein the frame displayed by each display controlled by a GPU in the set intersects the display region; and delivering coordinate-transformed drawing commands to the GPUs in the display set according to the display intersection regions, wherein each GPU in the display set only draws the content of its corresponding display intersection region.01-03-2013
20130033503Smart Dual Display System - A secure display system for a movable object, such as an aircraft, includes: a screen comprising at least two independent matrices formed of pixels, each of the matrices being controlled by an independent graphic channel; a light box comprising at least two independent subassemblies, each backlighting each half-screen; two bypass functions, a bypass function being associated with a graphic channel, a bypass function being linked to an input of one of the matrices; a central module having a function of mixing the data originating from the two independent graphic channels, and a function of separating said data, said separation module being connected to said bypass functions; each graphic channel comprising image-generation means; and two power supply means. The display system may be used in an aeroplane.02-07-2013
20130033504Seamless Display Migration - Exemplary embodiments of methods, apparatuses, and systems for seamlessly migrating a user visible display stream sent to a display device from one rendered display stream to another rendered display stream are described. For one embodiment, mirror video display streams are received from both a first graphics processing unit (GPU) and a second GPU, and the video display stream sent to a display device is switched from the video display stream from the first GPU to the video display stream from the second GPU, wherein the switching occurs during a blanking interval for the first GPU that overlaps with a blanking interval for the second GPU.02-07-2013
20130038615LOW-POWER GPU STATES FOR REDUCING POWER CONSUMPTION - The disclosed embodiments provide a system that drives a display from a computer system. During operation, the system detects an idle state in a first graphics-processing unit (GPU) used to drive the display. During the idle state, the system switches from using the first GPU to using a second GPU to drive the display and places the first GPU into a low-power state, wherein the low-power state reduces a power consumption of the computer system.02-14-2013
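A minimal sketch of the switch-then-power-down sequence this abstract outlines, assuming a software display multiplexer and an invented idle threshold; it is illustrative only, not the disclosed implementation:

    #include <chrono>

    enum class PowerState { Active, LowPower };

    struct Gpu {
        PowerState state = PowerState::Active;
        void enterLowPower() { state = PowerState::LowPower; }
    };

    struct DisplayMux {
        Gpu* source = nullptr;
        void driveFrom(Gpu* gpu) { source = gpu; }
    };

    // Called periodically. If the discrete GPU currently driving the display
    // has been idle longer than the (assumed) threshold, hand the display to
    // the integrated GPU first, and only then drop the discrete GPU into its
    // low-power state.
    void onIdleTick(Gpu& discrete, Gpu& integrated, DisplayMux& mux,
                    std::chrono::milliseconds idleFor) {
        using namespace std::chrono_literals;
        if (mux.source == &discrete && idleFor > 500ms) {
            mux.driveFrom(&integrated);
            discrete.enterLowPower();
        }
    }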
20130063450SMART POWER MANAGEMENT IN GRAPHICS PROCESSING UNIT (GPU) BASED CLUSTER COMPUTING DURING PREDICTABLY OCCURRING IDLE TIME - A method includes automatically acquiring, through a resource manager module associated with a driver program executing on a node of a cluster computing system, information associated with utilization of a number of Graphics Processing Units (GPUs) associated with the node, and automatically calculating a window of time in which the node is predictably underutilized on a recurring and periodic basis. The method also includes automatically switching off, when one or more GPUs is in an idle state during the window of time, power to the one or more GPUs to transition the one or more GPUs into a quiescent state of zero power utilization. Further, the method includes maintaining the one or more GPUs in the quiescent state until a processing requirement of the node necessitates utilization thereof at a rate higher than a predicted utilization rate of the node during the window of time.03-14-2013
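For illustration only (the utilization history array, the one-hour granularity and the gating rule are assumptions, not the resource manager described above), a recurring low-utilization window could be predicted and used to gate GPU power like this:

    #include <array>

    struct Window { int startHour; int endHour; };

    // Pick the hour of day with the lowest average GPU utilization from the
    // collected history; it becomes the recurring one-hour idle window.
    Window predictIdleWindow(const std::array<double, 24>& avgUtilizationByHour) {
        int best = 0;
        for (int h = 1; h < 24; ++h)
            if (avgUtilizationByHour[h] < avgUtilizationByHour[best]) best = h;
        return {best, (best + 1) % 24};
    }

    // Power is cut only when the current time falls inside the predicted
    // window and the GPU is actually idle, yielding the zero-power state.
    bool shouldPowerOffGpu(int currentHour, bool gpuIdle, const Window& w) {
        return currentHour == w.startHour && gpuIdle;
    }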
20130063451PARALLEL RUNTIME EXECUTION ON MULTIPLE PROCESSORS - A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices, such as CPUs or GPUs, concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is instead initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads.03-14-2013
20130069959RENDERING DEVICE, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND IMAGE OUTPUT APPARATUS - A rendering device includes a temporary memory, plural rendering processors, and a rendering controller. The temporary memory stores one or more rendering instructions and rendered results therefor in association with each other. The plural rendering processors each perform rendering processing in accordance with a rendering instruction, store the one or more rendering instructions and rendered results in association with each other in the temporary memory, when one or more similar rendering instructions exist for pages for which rendering processing was consecutively performed, and read and use the rendered results, in a case where rendered results associated with one or more rendering instructions are stored in the temporary memory. The rendering controller controls assigning a given rendering instruction to a corresponding one of the rendering processors in accordance with a given page editing instruction and causing the corresponding one of the rendering processors to perform rendering processing.03-21-2013
20130076761GRAPHICS PROCESSING SYSTEMS - In a tile-based graphics processing system having plural rendering processors, the set of tiles 03-28-2013
20130076762OCCLUSION QUERIES IN GRAPHICS PROCESSING - The fragment processing pipeline 03-28-2013
20130083040METHOD AND DEVICE FOR OVERLAPPING DISPLAY - Embodiments of an apparatus for having overlapping displays and methods for operating such apparatus can provide enhanced display and operational capabilities. The overlapping displays may include multiple overlapping transparent displays. Embodiments of additional apparatus, systems, and methods are disclosed.04-04-2013
20130083041IMAGE DISPLAY DEVICE - To provide an image display device to simplify software for rewriting data, and shorten rewrite time, a vehicle meter 04-04-2013
20130088500Policy-Based Switching Between Graphics-Processing Units - The disclosed embodiments provide a system that configures a computer system to switch between graphics-processing units (GPUs). In one embodiment, the system drives a display using a first graphics-processing unit (GPU) in the computer system. Next, the system detects one or more events associated with one or more dependencies on a second GPU in the computer system. Finally, in response to the event, the system prepares to switch from the first GPU to the second GPU as a signal source for driving the display.04-11-2013
20130120407Seam-Based Reduction and Expansion of Images Using Partial Solution Matrix Dependent on Dynamic Programming Access Pattern - Systems, methods, and computer-readable storage media for resizing images using seam carving techniques may include generation of a partial solution matrix by at least partially isolating dependencies between sub-problems of a dynamic programming problem corresponding to its solution within different regions of an input image. The number and/or shape of the isolated (or partially isolated) sub-problems may be dependent on the access pattern used by a dynamic programming operation to identify seams in the input image. Multiple sub-problems may be processed independently and in parallel on respective processor core(s) or threads thereof to generate the partial solution matrix. The partial solution matrix may then be processed to identify one or more low-cost seams of the input image. The methods may be implemented as stand-alone applications or as program instructions implementing components of a graphics application, executable by a CPU and/or GPU configured for parallel processing.05-16-2013
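For context, the dynamic-programming access pattern the abstract refers to is the classic cumulative seam-cost pass shown below; this single-threaded version is only a baseline sketch and does not reproduce the patent's partial-solution-matrix partitioning or its parallel decomposition:

    #include <algorithm>
    #include <vector>

    // energy is row-major (rows x cols, non-empty); returns the cumulative
    // vertical-seam cost matrix. Each cell depends on its three upper
    // neighbours, which is the dependency the patent isolates per region.
    std::vector<std::vector<double>> seamCost(
            const std::vector<std::vector<double>>& energy) {
        std::size_t rows = energy.size(), cols = energy[0].size();
        auto cost = energy;                       // first row is its own cost
        for (std::size_t r = 1; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c) {
                double best = cost[r - 1][c];     // parent directly above
                if (c > 0)        best = std::min(best, cost[r - 1][c - 1]);
                if (c + 1 < cols) best = std::min(best, cost[r - 1][c + 1]);
                cost[r][c] = energy[r][c] + best;
            }
        return cost;  // the minimum of the last row ends the lowest-cost seam
    }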
20130120408GRAPHICS PROCESSING UNIT MODULE - A general-purpose graphics processing unit (GPU) module, a system containing the general-purpose GPU module, and a method for driving the system are provided in accordance with various embodiments of the invention. In an embodiment, a general-purpose GPU module comprises a GPU, a data transfer input/output (I/O) port, a power supply I/O port, a control/SYNC module, and a power supply module. When a new general-purpose GPU module is detected as being coupled to the transfer link bus, the graphics processing tasks are allocated across all the coupled general-purpose GPU modules. In accordance with various embodiments of the invention, the costs of designing and using GPUs are thereby decreased.05-16-2013
20130120409INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing apparatus includes a first graphics chip having a first drawing processing capacity and being capable of producing a first image signal; a second graphics chip having a second drawing processing capacity higher than the first drawing processing capacity and being capable of producing a second image signal; an output changeover section capable of selectively outputting one of the first or second image signals; an inputting section configured to input a user operation to select one of the first graphics chip or the second graphics chip; and a control section configured to control the output of the output changeover section in response to the inputted user operation.05-16-2013
20130127882INPUT/OUTPUT DEVICES AND DISPLAY APPARATUSES USING THE SAME - An input/output (I/O) device is provided. The I/O device is capable of operating in a first mode or a second mode. The I/O device includes a first connection unit and a switch unit. The first connection unit has a plurality of down-link I/O ports and an up-link I/O port. The switch unit is controlled by a selection signal. The switch unit has an input terminal coupled to the up-link I/O port, a first output terminal, and a second output terminal. When the I/O device is operating in the first mode, the switch unit couples the input terminal to the first output terminal according to the selection signal. When the I/O device is operating in the second mode, the switch unit couples the input terminal to the second output terminal according to the selection signal.05-23-2013
20130141442METHOD AND APPARATUS FOR MULTI-CHIP PROCESSING - Various methods, computer-readable mediums and apparatus are disclosed. In one aspect, a method of generating a graphical image on a display device is provided that includes splitting geometry level processing of the image between plural processors coupled to an interposer. Primitives are created using each of the plural processors. Any primitives not needed to render the image are discarded. The image is rasterized using each of the plural processors. A portion of the image is rendered using one of the plural processors and any remaining portion of the image using one or more of the other plural processors.06-06-2013
20130147813Software Constants File - Methods and systems relating to providing constants are provided. In an embodiment, a method of providing constants in a processing device includes copying a constant of a first constant buffer to a second constant buffer, the first and second constant buffers being included in a ring of constant buffers and a size of the ring being one greater than a number of processes that the processing device can process concurrently, updating a value of the constant in the second buffer, and binding a command to be executed on the processing device to the second constant buffer.06-13-2013
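A sketch of the ring arrangement described above, with an assumed in-flight limit of three commands; the structure and field names are invented for illustration:

    #include <array>
    #include <cstddef>

    constexpr std::size_t kMaxInFlight = 3;                 // assumed device limit
    constexpr std::size_t kRingSize    = kMaxInFlight + 1;  // one spare for updates

    struct ConstantBuffer { float constants[16]; };

    struct ConstantRing {
        std::array<ConstantBuffer, kRingSize> buffers{};
        std::size_t current = 0;

        // Copy the live constants into the next slot, patch one value, and
        // return the slot a new command should be bound to. Because the ring
        // is one larger than the in-flight limit, the slot being written is
        // never one the device may still be reading.
        ConstantBuffer& updateConstant(std::size_t index, float value) {
            std::size_t next = (current + 1) % kRingSize;
            buffers[next] = buffers[current];   // copy old constants forward
            buffers[next].constants[index] = value;
            current = next;
            return buffers[current];
        }
    };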
20130147814DYNAMIC LOAD BALANCING IN MULTIPLE VIDEO PROCESSING UNIT (VPU) SYSTEMS - Systems and methods are provided for processing data. The systems and methods include multiple processors that each couple to receive commands and data, where the commands and/or data correspond to frames of video that include multiple pixels. An interlink module is coupled to receive processed data corresponding to the frames from each of the processors. The interlink module divides a first frame into multiple frame portions by dividing pixels of the first frame using at least one balance point. The interlink module dynamically determines a position for the balance point that minimizes differences between the workload of the processors during processing of commands and/or data of one or more subsequent frames.06-13-2013
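The balance-point adjustment can be pictured with a small helper like the one below; the scanline step size and the clamping are assumptions added for illustration, not taken from the patent:

    // Nudge a horizontal balance point between two video processors so that
    // their measured per-frame workloads converge over subsequent frames.
    int adjustBalancePoint(int balanceLine, int frameHeight,
                           double upperTimeMs, double lowerTimeMs) {
        const int step = 8;                       // scanlines moved per frame
        if (upperTimeMs > lowerTimeMs)            // upper processor overloaded:
            balanceLine -= step;                  //   give it fewer scanlines
        else if (lowerTimeMs > upperTimeMs)
            balanceLine += step;
        if (balanceLine < step) balanceLine = step;
        if (balanceLine > frameHeight - step) balanceLine = frameHeight - step;
        return balanceLine;                       // split for the next frame
    }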
20130147815MULTI-PROCESSOR ARCHITECTURE AND METHOD - Embodiments of a multi-processor architecture and method are described herein. Embodiments provide alternatives to the use of an external bridge integrated circuit (IC) architecture. For example, an embodiment multiplexes a peripheral bus such that multiple processors can use one peripheral interface slot without requiring an external bridge IC. Embodiments are usable with known bus protocols.06-13-2013
20130155076DISPLAY DATA PROCESSING - Disclosed are methods and apparatus for processing display data. The display data specify one or more presentations (e.g., digital signage information) for displaying to a user on an end-user device (e.g., a digital signage device). One or more processors receive one or more criteria (e.g., “user preferences”) and the display data. The one or more processors select some or all of the received display data. This selection may be dependent upon the one or more criteria. The one or more processors then provide, for display by the end-user device, the selected display data. This provision may be dependent upon the one or more criteria.06-20-2013
20130162658SYNCHRONIZATION WITH SEMAPHORES IN A MULTI-ENGINE GPU - A method for performing an operation using more than one resource may include several steps: requesting an operation to be performed by a resource; populating a ring frame with an indirect buffer command packet corresponding to the operation, which may include, in the event that the requested resource is found to be unavailable, creating a semaphore object with a resource identifier and timestamp; inserting a wait command packet corresponding to the semaphore object into the ring frame; and submitting the ring frame to the graphics engine.06-27-2013
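A hypothetical encoding of that ring-frame population step; the opcodes, packet layout and names are invented, and only the ordering of the wait packet ahead of the indirect-buffer packet follows the abstract:

    #include <cstdint>
    #include <vector>

    struct Packet    { std::uint32_t opcode; std::uint32_t resourceId; std::uint64_t value; };
    struct Semaphore { std::uint32_t resourceId; std::uint64_t timestamp; };

    constexpr std::uint32_t OP_WAIT_SEMAPHORE  = 0x10;  // assumed opcode values
    constexpr std::uint32_t OP_INDIRECT_BUFFER = 0x20;

    std::vector<Packet> buildRingFrame(std::uint32_t resourceId, std::uint64_t ibAddress,
                                       bool resourceBusy, std::uint64_t now) {
        std::vector<Packet> frame;
        if (resourceBusy) {
            // Resource unavailable: create a semaphore object and make the
            // engine wait on it before consuming the indirect buffer.
            Semaphore sem{resourceId, now};
            frame.push_back({OP_WAIT_SEMAPHORE, sem.resourceId, sem.timestamp});
        }
        frame.push_back({OP_INDIRECT_BUFFER, resourceId, ibAddress});
        return frame;  // ready to be submitted to the graphics engine
    }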
20130176319MULTI-USER MULTI-GPU RENDER SERVER APPARATUS AND METHODS - The invention provides, in some aspects, a system for rendering images, the system having one or more client digital data processors and a server digital data processor in communications coupling with the one or more client digital data processors, the server digital data processor having one or more graphics processing units. The system additionally comprises a render server module executing on the server digital data processor and in communications coupling with the graphics processing units, where the render server module issues a command in response to a request from a first client digital data processor. The graphics processing units on the server digital data processor simultaneously process image data in response to interleaved commands from (i) the render server module on behalf of the first client digital data processor, and (ii) one or more requests from (a) the render server module on behalf of any of the other client digital data processors, and (b) other functionality on the server digital data processor.07-11-2013
20130194282DISPLAY APPARATUS, UPGRADE APPARATUS, DISPLAY SYSTEM INCLUDING THE SAME AND CONTROL METHOD THEREOF - A display apparatus, an upgrade apparatus, a display system including the same, and a control method thereof are provided, the display apparatus including: a first image processor which processes an input image signal and outputs a first output signal; an upgrade apparatus connector which is connectable to an upgrade apparatus including a second image processor; a display which displays at least one of a first image corresponding to the first output signal and a second image corresponding to a second output signal output by the second image processor; a first storage which stores first apparatus information about the upgrade apparatus; and a first controller which sets the upgrade apparatus to a communication state based on the first apparatus information stored in the first storage if the upgrade apparatus is connected to the upgrade apparatus connector.08-01-2013
20130215124Media Action Script Acceleration Apparatus - Exemplary apparatus, method, and system embodiments provide for accelerated hardware processing of an action script for a graphical image for visual display. An exemplary apparatus comprises: a first memory; and a plurality of processors to separate the action script from other data, to convert a plurality of descriptive elements of the action script into a plurality of hardware-level operational or control codes, and to perform one or more operations corresponding to an operational code of the plurality of operational codes using corresponding data to generate pixel data for the graphical image. In an exemplary embodiment, at least one processor further is to parse the action script into the plurality of descriptive elements and the corresponding data, and to extract data from the action script and to store the extracted data in the first memory as a plurality of control words having the corresponding data in predetermined fields.08-22-2013
20130222397Media Action Script Acceleration Method - Exemplary apparatus, method, and system embodiments provide for accelerated hardware processing of an action script for a graphical image for visual display. An exemplary method comprises: converting a plurality of descriptive elements into a plurality of operational codes which at least partially control at least one processor circuit; and using at least one processor circuit, performing one or more operations corresponding to an operational code to generate pixel data for the graphical image. Another exemplary method for processing a data file which has not been fully compiled to a machine code and comprising interpretable descriptions of the graphical image in a non-pixel-bitmap form, comprises: separating the data file from other data; parsing and converting the data file to a plurality of hardware-level operational codes and corresponding data; and performing a plurality of operations in response to at least some hardware-level operational codes to generate pixel data for the graphical image. Exemplary embodiments also may be performed automatically by a system comprising one or more computing devices.08-29-2013
20130229420PERFORMANCE ALLOCATION METHOD AND APPARATUS - In accordance with some embodiments, a graphics process frame generation frame rate may be monitored in combination with a utilization or work load metric for the graphics process in order to allocate performance resources to the graphics process and in some cases, between the graphics process and a central processing unit.09-05-2013
20130278613SECONDARY GRAPHICS PROCESSOR CONTROL SYSTEM - A secondary graphics processor control system includes a secondary graphics processor. A controller is coupled to the secondary graphics processor. The controller detects the start of an application that is associated with a secondary graphics processor and then determines a power capability of a battery. The controller then either prevents enablement of the secondary graphics processor if the power capability is below a predetermined threshold, such that only a primary graphics processor processes graphics for the application, or allows enablement of the secondary graphics processor if the power capability is above the predetermined threshold, such that the secondary graphics processor processes graphics for the application. The primary graphics processor may be an integrated graphics processing unit (iGPU) provided by a system processor that is mounted to a board, and the secondary graphics processor may be a discrete graphics processing unit (dGPU) that is coupled to the board.10-24-2013
20130314425DISTRIBUTION OF TASKS AMONG ASYMMETRIC PROCESSING ELEMENTS - Techniques to control power and processing among a plurality of asymmetric cores. In one embodiment, one or more asymmetric cores are power managed to migrate processes or threads among a plurality of cores according to the performance and power needs of the system.11-28-2013
20130328891SYSTEM AND METHOD FOR PROVIDING LOW LATENCY TO APPLICATIONS USING HETEROGENEOUS PROCESSORS - Methods, apparatuses, and computer readable media are disclosed for responding to requests. A method of responding to requests may include receiving requests comprising callback functions. The one or more requests may be received in a first memory associated with processors of a first type, which may be CPUs. The requests may be moved to a second memory. The second memory may be associated with processors of a second type, which may be GPUs. GPU threads may process the requests to determine a result for the requests, when a number of the requests is at least a threshold number. The method may include moving the results to the first memory. The method may include the CPUs executing the one or more callback functions with the corresponding result. A GPU persistent thread may check the number of requests to determine when a threshold number of requests is reached.12-12-2013
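A sketch of the thresholded batching idea with the CPU-side callback dispatch written as plain C++; the threshold value, the doubled-payload "processing" and all names are placeholders, and the GPU persistent thread is only stood in for by flush():

    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Request {
        int payload;
        std::function<void(int)> callback;   // runs on the CPU with the result
    };

    class BatchedDispatcher {
        std::vector<Request> pending_;       // "first memory" (CPU side)
        static constexpr std::size_t kThreshold = 64;   // assumed batch size
    public:
        void submit(Request r) {
            pending_.push_back(std::move(r));
            if (pending_.size() >= kThreshold) flush();
        }
        void flush() {
            // Stand-in for GPU threads processing the whole batch in parallel.
            std::vector<int> results;
            for (const Request& r : pending_) results.push_back(r.payload * 2);
            // Results moved back; callbacks executed on the CPU side.
            for (std::size_t i = 0; i < pending_.size(); ++i)
                pending_[i].callback(results[i]);
            pending_.clear();
        }
    };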
20140002465METHOD AND APPARATUS FOR MANAGING IMAGE DATA FOR PRESENTATION ON A DISPLAY - 01-02-2014
20140009476AUGMENTATION OF MULTIMEDIA CONSUMPTION - Disclosed are methods and apparatus for augmenting a user's multimedia consumption experience. In the methods, whilst the user is consuming the multimedia presentation using a first device, that device provides (to one or more remote processors) information that may be used to identify a relevant location. The one or more processors use this information to identify the location and acquire a virtual environment. This virtual environment may be a virtual representation of the location. The virtual environment is presented to the user on a second (companion) device. Using the second device, the user may explore the virtual environment and interact with virtual objects therein.01-09-2014
20140035936METHODS AND APPARATUS FOR PROCESSING GRAPHICS DATA USING MULTIPLE PROCESSING CIRCUITS - Methods and apparatus for providing multiple graphics processing capacity, while utilizing otherwise unused integrated graphics processing circuitry on a bridge circuit along with an external or discrete graphics processing unit, are disclosed. In particular, a bridge circuit includes an integrated graphics processing circuit configured to process graphics jobs. The bridge circuit also includes an interface operable to interface with a discrete graphics processing circuit. A controller is included with the bridge circuit and is responsive, whenever the discrete graphics processing circuit is coupled to the interface, to cause the integrated graphics processing circuit to process a task of the graphics job in conjunction with operation of the discrete graphics processing circuit, which is operable to process another task of the graphics job. Corresponding methods are also disclosed.02-06-2014
20140043343IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING INTERFACE CIRCUIT - An image processing apparatus includes a plurality of image processing module parts, a module arbiter part, and a DMAC (Direct Memory Access Controller) part. Each of the image processing module parts includes a module core for executing a predetermined image processing. The plurality of image processing module parts is connected to the module arbiter part. The module arbiter part arbitrates memory access which is given by the plurality of image processing module parts through a bus. The DMAC part is connected between the module arbiter part and the bus, and executes memory access related to the arbitration result obtained by the module arbiter part.02-13-2014
20140043344TECHNIQUES FOR A SECURE GRAPHICS ARCHITECTURE - Techniques for implementing a secure graphics architecture are described. In one embodiment, for example, an apparatus may comprise a processor circuit and a graphics management module, and the graphics management module may be operative to receive graphics information from the processor circuit, generate graphics processing information based on the graphics information, and send the graphics processing information to a graphics processor circuit arranged to generate graphics display information based on the graphics processing information. In this manner, security threats such as screen capture attacks and/or theft of content protected media streams may be reduced. Other embodiments may be described and claimed.02-13-2014
20140049548MEMORY SHARING VIA A UNIFIED MEMORY ARCHITECTURE - A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.02-20-2014
20140055464HARDWARE ACCELERATION FOR REMOTE DESKTOP PROTOCOL - A method for offloading remote terminal services processing tasks to a peripheral device that would otherwise be performed in a computer system's processor and memory. In one embodiment, the disclosed method is utilized in a layered network model, wherein computing tasks that are typically performed in network applications are instead offloaded to a peripheral such as a network interface card (NIC).02-27-2014
20140085318Multi-GPU FISTA Implementation for MR Reconstruction with Non-Uniform K-Space Sampling - A system for performing image reconstruction in a multi-threaded computing environment includes one or more central processing units executing a plurality of k-space components and a plurality of graphic processing units executing a reconstruction component. The k-space components executing on the central processing units include a k-space sample data component operating in a first thread and configured to receive k-space sample data from a first file interface; a k-space sample coordinate data component operating in a second thread and configured to receive k-space sample coordinate data from a second file interface; and a k-space sample weight data component operating in a third thread and configured to retrieve k-space sample weight data from a third file interface. The reconstruction component is configured to receive one or more k-space input data buffers comprising the k-space sample data, the k-space sample coordinate data, and the k-space sample weight data from the one or more central processing units, and reconstruct an image based on the input data buffers using an iterative reconstruction algorithm.03-27-2014
20140092104BOOTING METHOD AND ELECTRONIC DEVICE - A booting method for an electronic device having a display device is provided. The method includes receiving a booting signal and performing a booting procedure according to the booting signal. The booting procedure comprises activating a basic input/output system to read a graphic device variable. According to the graphic device variable, a system configuration of the electronic device is set so that the booting procedure is performed with a first graphic device corresponding to the graphic device variable. When the display device displays a booting frame, an operating system is activated. When there is no frame shown on the display device and a hot-key signal is received, the graphic device variable is rewritten according to the hot-key signal so that it corresponds to a second graphic device. A rebooting procedure is then performed on the electronic device.04-03-2014
20140098111DATA PROCESSING SYSTEM FOR TRANSMITTING COMPRESSED DISPLAY DATA OVER DISPLAY INTERFACE - A data processing system has a first data processing apparatus and a second data processing apparatus. The first data processing apparatus includes a first controller, a display processor, a compressor and an output interface. The first controller controls the first data processing apparatus. The display processor generates a first input display data. The compressor generates a compressed display data according to the first input display data. The output interface packs the compressed display data into a bitstream, and outputs the bitstream via a display interface. The second data processing apparatus includes an input interface, a second controller, a display buffer and a de-compressor. The input interface un-packs the bitstream into a second input display data. The second controller controls the second data processing apparatus. The display buffer buffers the second input display data and outputs a buffered display data. The de-compressor de-compresses the buffered display data.04-10-2014
20140104285PARALLEL FLOOD-FILL TECHNIQUES AND ARCHITECTURE - Flood-fill techniques and architecture are disclosed. In accordance with one embodiment, the architecture comprises a hardware primitive with a software interface which collectively allow for both data-based and task-based parallelism in executing a flood-fill process. The hardware primitive is defined to do the flood-fill function and is scalable and may be implemented with a bitwise definition that can be tuned to meet power/performance targets, in some embodiments. In executing a flood-fill operation, and in accordance with an example embodiment, the software interface produces parallel threads and issues them to processing elements, such that each of the threads can run independently until done. Each processing element in turn accesses a flood-fill hardware primitive, each of which is configured to flood a seed inside an N×M image block. In some cases, processing element commands to the flood-fill hardware primitive(s) can be queued and acted upon pursuant to an arbitration scheme.04-17-2014
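A plain-software stand-in for what one such flood-fill primitive would compute on its N x M block (the block size, the 4-connectivity and the names are assumptions; the real primitive is hardware):

    #include <array>
    #include <queue>
    #include <utility>

    constexpr int N = 8, M = 8;   // assumed block dimensions

    // Flood the 4-connected region containing the seed inside one block,
    // replacing the seed's value with fillValue.
    void floodFillBlock(std::array<std::array<int, M>, N>& block,
                        int seedRow, int seedCol, int fillValue) {
        int target = block[seedRow][seedCol];
        if (target == fillValue) return;
        std::queue<std::pair<int, int>> frontier;
        frontier.push({seedRow, seedCol});
        block[seedRow][seedCol] = fillValue;
        const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
        while (!frontier.empty()) {
            auto [r, c] = frontier.front();
            frontier.pop();
            for (int k = 0; k < 4; ++k) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < N && nc >= 0 && nc < M &&
                    block[nr][nc] == target) {
                    block[nr][nc] = fillValue;
                    frontier.push({nr, nc});
                }
            }
        }
    }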
20140125680METHOD FOR GRAPHICS DRIVER LEVEL DECOUPLED RENDERING AND DISPLAY - The invention provides a method for driving a graphic processing unit (GPU), where a driver applies two threads to drive one or more GPUs. The method includes the steps of: (a) activating a rendering thread and a displaying thread in response to invoking by an application thread of a graphics application; (b) sending, according to the rendering thread, a plurality of rendering instructions for enabling generation of at least a first rendered frame and a second rendered frame; and (c) sending, according to the displaying thread, one or more interpolating instructions and one or more displaying instructions, the interpolating instructions enabling execution of interpolation according to the at least first and second rendered frames to create one or more interpolated frames, and the displaying instructions enabling display of the one or more interpolated frames.05-08-2014
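A toy two-thread arrangement in the spirit of steps (a)-(c) above, assuming a trivial linear blend for the interpolation; the frame contents, blend weight and loop counts are invented for illustration:

    #include <cstddef>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct Frame { std::vector<float> pixels; };

    Frame interpolate(const Frame& a, const Frame& b, float t) {
        Frame out{std::vector<float>(a.pixels.size())};
        for (std::size_t i = 0; i < a.pixels.size(); ++i)
            out.pixels[i] = (1.0f - t) * a.pixels[i] + t * b.pixels[i];
        return out;
    }

    int main() {
        std::mutex m;
        Frame prev{{0.f, 0.f}}, curr{{0.f, 0.f}};

        std::thread renderThread([&] {                 // issues rendering work
            for (int f = 1; f <= 10; ++f) {
                Frame rendered{{float(f), float(f)}};  // stand-in for GPU output
                std::lock_guard<std::mutex> lk(m);
                prev = curr;
                curr = rendered;
            }
        });
        std::thread displayThread([&] {                // interpolates and displays
            for (int i = 0; i < 20; ++i) {
                std::lock_guard<std::mutex> lk(m);
                Frame mid = interpolate(prev, curr, 0.5f);  // in-between frame
                (void)mid;                              // stand-in for the display call
            }
        });
        renderThread.join();
        displayThread.join();
    }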
20140132611SYSTEM AND METHOD FOR DATA TRANSMISSION - The present invention discloses a system and a method for data transmission. The system includes: a plurality of graphics processing units; a global shared memory for storing data transmitted among the plurality of graphics processing units; and an arbitration circuit module, which is coupled to each of the plurality of graphics processing units and the global shared memory and configured to arbitrate access requests to the global shared memory from the respective graphics processing units to avoid an access conflict among them. The system and the method for data transmission provided by the present invention enable the respective GPUs in the system to transmit data through the global shared memory rather than a PCIE interface, thus saving data transmission bandwidth significantly and further improving computing speed.05-15-2014
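The arbitration the abstract attributes to a hardware circuit can be pictured with a simple round-robin grant, sketched here purely for illustration (the patent does not specify this policy):

    #include <cstddef>
    #include <vector>

    class RoundRobinArbiter {
        std::size_t last_ = 0;
    public:
        // requests[i] is true when GPU i wants the shared memory this cycle.
        // Returns the index granted, or -1 when nobody is requesting, so at
        // most one GPU ever accesses the shared memory at a time.
        int grant(const std::vector<bool>& requests) {
            std::size_t n = requests.size();
            for (std::size_t k = 1; k <= n; ++k) {
                std::size_t i = (last_ + k) % n;
                if (requests[i]) { last_ = i; return static_cast<int>(i); }
            }
            return -1;
        }
    };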
20140139532Framework For Dynamic Configuration Of Hardware Resources - Among other things, dynamically selecting or configuring one or more hardware resources to render a particular display data includes obtaining a request for rendering display data. The request includes a specification describing a desired rendering process. Based on the specification and the display data, hardware is selected or configured. The display data is rendered using the selected or configured hardware.05-22-2014
20140152674ELECTRONIC APPARATUS, EXTERNAL APPARATUS AND METHOD OF CONTROLLING THE SAME - An electronic apparatus includes a locking unit to selectively lock a physical connection with an external apparatus and a control unit to control an operation mode of the electronic apparatus, according to a connection state with the external apparatus, in which the control unit controls the locking unit to lock the connection with the external apparatus, when the electronic apparatus is in an operation mode of using a graphic processing unit of the external apparatus.06-05-2014
20140168227SYSTEM AND METHOD FOR VERSIONING BUFFER STATES AND GRAPHICS PROCESSING UNIT INCORPORATING THE SAME - A system and method for versioning states of a buffer. In one embodiment, the system includes: (1) a page table lookup and coalesce circuit operable to provide a page table directory request for a translatable virtual address of the buffer to a page table stored in a virtual address space and (2) a page directory processing circuit associated with the page table lookup and coalesce circuit and operable to provide a translated virtual address based on the virtual address and a page table load response received from the page table.06-19-2014
20140176572Offloading Touch Processing To A Graphics Processor - In an embodiment, a processor includes a graphics domain including a plurality of graphics engines, each having at least one execution unit. The graphics domain is to schedule a touch application offloaded from a core domain to at least one of the plurality of graphics engines. The touch application is to execute responsive to an update to a doorbell location in a system memory coupled to the processor, where the doorbell location is written responsive to a user input to a touch input device. Other embodiments are described and claimed.06-26-2014
20140176573Offloading Touch Processing To A Graphics Processor - In an embodiment, a processor includes a graphics domain including a plurality of graphics engines, each having at least one execution unit. The graphics domain is to schedule a touch application offloaded from a core domain to at least one of the plurality of graphics engines. The touch application is to execute responsive to an update to a doorbell location in a system memory coupled to the processor, where the doorbell location is written responsive to a user input to a touch input device. Other embodiments are described and claimed.06-26-2014
20140192063CONTROLLING EMBEDDED IMAGE DATA IN A SMART DISPLAY - A method and apparatus for controlling embedded raw image data within a display having internal memory. The method includes sending a frame of code and a final compilation of raw image data to the internal memory of the display from a primary host processor prior to the primary host processor entering a sleep state. When the primary host processor has entered a sleep state, control of the raw image data is redirected to at least one secondary host processor. The secondary host processor reads the frame of code within the internal memory of the display and instructs the display to perform an image-related behavior output that may include updating the display itself based on the frame of code found in the internal memory of the display.07-10-2014
20140192064LOW-POWER GPU STATES FOR REDUCING POWER CONSUMPTION - The disclosed embodiments provide a system that drives a display from a computer system. During operation, the system detects an idle state in a first graphics-processing unit (GPU) used to drive the display. During the idle state, the system switches from using the first GPU to using a second GPU to drive the display and places the first GPU into a low-power state, wherein the low-power state reduces a power consumption of the computer system.07-10-2014
20140204098SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR GRAPHICS PROCESSING UNIT (GPU) DEMAND PAGING - A system, method, and computer program product are provided for GPU demand paging. In operation, input data is addressed in terms of a virtual address space. Additionally, the input data is organized into one or more pages of data. Further, the input data organized as the one or more pages of data is at least temporarily stored in a physical cache. In addition, access to the input data in the physical cache is facilitated.07-24-2014
20140218376SYSTEM AND METHOD FOR IMAGE PROCESSING - A system and method for image processing are provided. The system comprises a main computing device and a secondary computing device. The main computing device comprises a main graphics card and a main central processing unit, and the secondary computing device comprises a secondary graphics card and a secondary central processing unit. The main computing device is configured to detect the secondary computing device. The main central processing unit is configured to send a request to process raw image data together to the secondary central processing unit and allocate the raw image data to the main graphics card and the secondary graphics card after receiving a response from the secondary central processing unit. The main graphics card and the secondary graphics card are configured to process images based on the allocation of the main central processing unit. The system and method for image processing provided by the present invention can take full advantage of graphics cards located in different computing devices and enable these graphics cards to work together to accelerate image processing.08-07-2014
20140240325INCREASED EXPANSION PORT UTILIZATION IN A MOTHERBOARD OF A DATA PROCESSING DEVICE BY A GRAPHICS PROCESSING UNIT (GPU) THEREOF - A method includes abstracting, through a driver component, a Graphics Processing Unit (GPU) of a data processing device as a set of GPUs. The GPU is configured to be received in an expansion port on a motherboard of the data processing device. The method also includes enabling, through the abstraction, utilization of a greater number of lanes on the expansion port and/or another expansion port on the motherboard of the data processing device than the GPU is otherwise capable of using.08-28-2014
20140240326Method, Apparatus, System For Representing, Specifying And Using Deadlines - In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated order identifier and a deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values. Other embodiments are described and claimed.08-28-2014
20140253564SYSTEMS AND METHODS FOR HOT PLUG GPU POWER CONTROL - Systems and methods include an electronic device having multiple GPUs and a GPU power control process that controls switching between a first GPU and a second GPU, such as a high performance GPU. The electronic device uses the first GPU when an external device is coupled to an adapter connected to the electronic device.09-11-2014
20140285499RENDERING SYSTEM, RENDERING SERVER, CONTROL METHOD THEREOF, PROGRAM, AND RECORDING MEDIUM - One device generates a first screen by executing some processes, including a first process, of the rendering processing of the screen to be displayed, in accordance with the information required to determine the rendered contents. The other devices each generate a second screen by executing some processes of that rendering processing which do not include the first process but include a second process different from the first process, and send the second screen to the one device. The one device then receives the second screens generated by the respective other devices and generates the screen to be displayed by compositing the first and second screens.09-25-2014
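A compact sketch of that split render-and-composite flow, with hypothetical render_first_pass, render_second_pass, and composite helpers standing in for the first and second processes:

    def build_display_screen(one_device, other_devices, render_info):
        # The one device runs the first rendering process locally.
        first_screen = one_device.render_first_pass(render_info)
        # Each other device runs a different (second) process and returns its screen.
        second_screens = [dev.render_second_pass(render_info) for dev in other_devices]
        # The screen to be displayed is the composite of the first and second screens.
        return one_device.composite(first_screen, second_screens)
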
20140292773VIRTUALIZATION METHOD OF VERTICAL-SYNCHRONIZATION IN GRAPHICS SYSTEMS - A method for rendering frames in graphic systems includes not displaying at least one frame in a sequence of frames.10-02-2014
20140306966VIDEO WALL - A video wall, having screens, cables, synchronization detection modules and a central control unit. The cables are operative to carry image data to be displayed on the screens. The synchronization detection modules are coupled between the screens and the cables for detection of a feature symbol. The central control unit collects detection results from the synchronization detection modules, and the detection results are utilized in the adjustment of the image data before cable transmission. In this manner, image display synchronization between the screens is achieved. The synchronization detection modules may be implemented as connectors, each having a first end connected to a screen and a second end connected to a cable.10-16-2014
20140340410METHOD AND SERVER FOR SHARING GRAPHICS PROCESSING UNIT RESOURCES - A method for sharing graphics processing unit (GPU) resources between a first server and a second server, each of the servers comprising a video adapter, the adapter comprising a GPU. The second server receives an IP address of the first server when the first server has a load rate less than a predetermined value and the second server has a load rate greater than the predetermined value; the second server then packages pending image data and transmits the packaged image data to the first server for processing.11-20-2014
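The offload decision can be sketched as below (Python, with invented load_rate(), package_pending_images(), and send_packaged() helpers; the threshold value is illustrative):

    LOAD_THRESHOLD = 0.8   # the "predetermined value"; chosen arbitrarily for illustration

    def maybe_offload(second_server, first_server):
        # Offload only when the second server is overloaded and the first
        # server has spare GPU capacity.
        if (second_server.load_rate() > LOAD_THRESHOLD
                and first_server.load_rate() < LOAD_THRESHOLD):
            peer_ip = first_server.ip_address                  # IP address received from the first server
            package = second_server.package_pending_images()   # package pending image data
            second_server.send_packaged(package, peer_ip)      # processed remotely on the peer's GPU
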
20140347372LOAD BALANCING BETWEEN GENERAL PURPOSE PROCESSORS AND GRAPHICS PROCESSORS - Disclosed are various embodiments for facilitating load balancing between central processing units (CPUs) and graphics processing units (GPUs). A request is obtained to execute a first application in one or more computing devices. In one embodiment, a second application associated with the first application is assigned to be executed in GPUs of the one or more computing devices instead of CPUs of the one or more computing devices when a resource usage profile associated with the first application indicates that the first application imposes a greater CPU load than GPU load. Conversely, the second application is assigned to be executed in the CPUs instead of the GPUs when the resource usage profile indicates that the first application imposes a greater GPU load than CPU load.11-27-2014
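The assignment rule reduces to comparing the two halves of the resource usage profile. A tiny sketch, assuming the profile is a dict with estimated CPU and GPU loads (structure and names are illustrative):

    def assign_companion(profile):
        # profile: {"cpu_load": <estimate>, "gpu_load": <estimate>} imposed by the first application.
        if profile["cpu_load"] > profile["gpu_load"]:
            return "GPU"   # first application is CPU-heavy, run the second application on the GPUs
        return "CPU"       # first application is GPU-heavy, run the second application on the CPUs
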
20140368514DEVICE, METHOD AND SYSTEM FOR PROCESSING AN IMAGE DATA STREAM - An implementation relates to a device for processing an image data stream. The device may include a first processing unit and a second processing unit for receiving the image data stream. The first processing unit may be arranged to provide a first data stream that has a reduced bandwidth compared to the image data stream. The second processing unit may be arranged to provide a second data stream that also has a reduced bandwidth compared to the image data stream.12-18-2014
20140375657Synchronization Points for State Information - Techniques for synchronization points for state information are described. In at least some embodiments, synchronization points are employed to propagate state information among different processing threads. A synchronization point, for example, can be employed to propagate state information among different independently-executing threads. Accordingly, in at least some embodiments, synchronization points serve as inter-thread communications among different independently-executing threads.12-25-2014
20150009221DIRECT INTERFACING OF AN EXTERNAL GRAPHICS CARD TO A DATA PROCESSING DEVICE AT A MOTHERBOARD-LEVEL - A method includes providing an Input/Output (I/O) interface at a periphery of a motherboard of a data processing device, and providing traces between a processor of the data processing device and the I/O interface across a surface of the motherboard. The traces provide conductive pathways between circuits of the processor and the I/O interface. The method also includes exposing the I/O interface through an external cosmetic surface of the data processing device in an assembled state thereof by way of a port complementary to that of a port of an external graphics card to enable direct coupling of the external graphics card to the data processing device through the exposed I/O interface by way of the complementary ports to provide boosting of processing through the data processing device.01-08-2015
20150015590DISPLAY DEVICE, DATA PROCESSING APPARATUS, AND RELATED METHOD - A data processing apparatus includes a diagonal detector, a first processor, and a second processor. The diagonal detector may determine whether a red-blue data set includes data for controlling a display device to display any diagonal line, the display device including subpixels arranged in first-type lines and second-type lines that are alternately disposed, the red-blue data set including 9 data values that correspond to 9 subpixels among the subpixels, the 9 subpixels forming a 3-by-3 array that includes a center subpixel, the 9 data values including a center data value that corresponds to the center subpixel. The first/second processor may process the center data value using a first/second coefficient to produce a first/second value that corresponds to the center subpixel if the center subpixel is in the first-type/second-type lines.01-15-2015
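A loose sketch of the per-pixel path: a simple variance test over the two diagonals stands in for the patent's diagonal detector, and the coefficient is chosen by whether the center subpixel lies in a first-type or second-type line (threshold and helpers are assumptions, not the claimed method):

    def has_diagonal(window, threshold=8):
        # window: 3-by-3 list of data values around the center subpixel.
        main_diag = (window[0][0], window[1][1], window[2][2])
        anti_diag = (window[0][2], window[1][1], window[2][0])
        return (max(main_diag) - min(main_diag) < threshold
                or max(anti_diag) - min(anti_diag) < threshold)

    def process_center(window, first_coeff, second_coeff, in_first_type_line):
        center = window[1][1]
        # The first processor handles first-type lines, the second processor handles
        # second-type lines; both paths are collapsed into one function here.
        coeff = first_coeff if in_first_type_line else second_coeff
        return center * coeff
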
20150029199ELECTRONIC DEVICE WITH DIFFERENT PROCESSING MODES - An electronic device can connect to and communicate with multiple independent load media. The electronic device comprises a processor and a first switch module. The processor is capable of switching between a first working mode and a second working mode. Under the second working mode, the processor can generate a second control signal whereby independent connections are established between at least two load media and the first switch module. The processor processes the signals from the selected load media simultaneously.01-29-2015
20150035840USING GROUP PAGE FAULT DESCRIPTORS TO HANDLE CONTEXT SWITCHES AND PROCESS TERMINATIONS IN GRAPHICS PROCESSORS - Methods and systems may provide for detecting an end of execution of a process on a graphics processor and providing a group page fault descriptor to a page miss handler of an operating system (OS) in response to the end of execution of the process, wherein the group page fault descriptor may indicate to the page miss handler that no further page fault requests will be generated by the graphics processor until one or more outstanding page fault requests are satisfied. Additionally, a response to the group page fault descriptor may be received from the page miss handler. In one example, a process identifier is incorporated into the group page fault descriptor, wherein the process identifier is shared by the group page fault descriptor and the one or more outstanding page fault requests.02-05-2015
20150042664SCALE-UP TECHNIQUES FOR MULTI-GPU PASSTHROUGH - A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.02-12-2015
20150049094MULTI GPU INTERCONNECT TECHNIQUES - A graphics processing subsystem includes one or more memory devices and two or more graphics processing units (GPU). The graphics processing units each include a memory interface. A first sub-set of the memory interface of the first graphics processing unit communicatively couples the first graphics processing unit to the first memory device. A first sub-set of the memory interface of the second graphics processing unit is connected to a second sub-set of the memory interface of the first graphics processing unit.02-19-2015
20150070363MULTIMEDIA DATA PROCESSING METHOD IN GENERAL PURPOSE PROGRAMMABLE COMPUTING DEVICE AND DATA PROCESSING SYSTEM ACCORDING TO THE MULTIMEDIA DATA PROCESSING METHOD - A method of processing multimedia data includes: separating a defined application kernel into a data patch kernel and a data processing kernel; requesting, by the data processing kernel, access to patch data of the multimedia data, from the data patch kernel; performing, by the data patch kernel, an operation that is independent of the request and preparing data for the data access based on the request; and performing, by the data processing kernel, an arithmetic operation on work items of the prepared data when the data has been prepared by the data patch kernel.03-12-2015
20150077422PARALLEL FLOOD-FILL TECHNIQUES AND ARCHITECTURE - Flood-fill techniques and architecture are disclosed. In accordance with one embodiment, the architecture comprises a hardware primitive with a software interface which collectively allow for both data-based and task-based parallelism in executing a flood-fill process. The hardware primitive is defined to do the flood-fill function and is scalable and may be implemented with a bitwise definition that can be tuned to meet power/performance targets, in some embodiments. In executing a flood-fill operation, and in accordance with an example embodiment, the software interface produces parallel threads and issues them to processing elements, such that each of the threads can run independently until done. Each processing element in turn accesses a flood-fill hardware primitive, each of which is configured to flood a seed inside an N×M image block. In some cases, processing element commands to the flood-fill hardware primitive(s) can be queued and acted upon pursuant to an arbitration scheme.03-19-2015
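A software model of the scheme, in which concurrent.futures threads stand in for the processing elements and a plain stack-based fill stands in for the N×M flood-fill hardware primitive (all of it illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def flood_fill_block(block, seed, fill):
        # Software stand-in for the hardware primitive flooding a seed in an N x M block.
        rows, cols = len(block), len(block[0])
        target = block[seed[0]][seed[1]]
        if target == fill:
            return block
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if 0 <= r < rows and 0 <= c < cols and block[r][c] == target:
                block[r][c] = fill
                stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return block

    def parallel_flood_fill(blocks_with_seeds, fill):
        # Task-based parallelism: each thread floods its own block independently until done.
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(flood_fill_block, block, seed, fill)
                       for block, seed in blocks_with_seeds]
            return [f.result() for f in futures]
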
20150123977LOW LATENCY AND HIGH PERFORMANCE SYNCHRONIZATION MECHANISM AMONGST PIXEL PIPE UNITS - A method for synchronizing a plurality of pixel processing units is disclosed. The method includes sending a first trigger to a first pixel processing unit to execute a first operation on a portion of a frame of data. The method also includes sending a second trigger to a second pixel processing unit to execute a second operation on the portion of the frame of data when the first operation has completed. The first operation has completed when the first operation reaches a sub-frame boundary.05-07-2015
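The trigger hand-off can be sketched as follows, with invented unit objects exposing trigger() and wait methods:

    def process_frame(frame_portions, first_unit, second_unit):
        for portion in frame_portions:
            first_unit.trigger(portion)            # first operation on this portion of the frame
            first_unit.wait_sub_frame_boundary()   # first operation completes at the sub-frame boundary
            second_unit.trigger(portion)           # only now may the second operation start
        second_unit.wait_idle()
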
20150123978Shared Virtual Memory - Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.05-07-2015
20150130820HYBRID RENDERING SYSTEMS AND METHODS - Embodiments of a system and method for enhanced graphics rendering performance in a hybrid computer system are generally described herein. In some embodiments, a graphical element in a frame, application, or web page, which is to be presented to a user via a web browser, is rendered either by a first processor or a second processor based on indications of whether the first or the second processor is equipped or configured to provide faster rendering. A rendering engine may utilize either processor based on historical or anticipated rendering performance, and may dynamically switch between the hardware decoder and the general-purpose processor to achieve rendering-time performance improvement. Switches between processors may be limited to a fixed number of switches or a maximum switching frequency.05-14-2015
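A hedged sketch of such a selection policy, assuming render times are recorded per processor and switches are capped (structure and names are invented, not the described engine):

    class HybridRenderer:
        def __init__(self, hw_decoder, cpu_renderer, max_switches=4):
            self.renderers = {"hw": hw_decoder, "cpu": cpu_renderer}
            self.history = {"hw": [], "cpu": []}   # observed render times in seconds
            self.current = "cpu"
            self.switches_left = max_switches      # limit on the number of switches

        def record(self, name, seconds):
            self.history[name].append(seconds)

        def pick(self):
            def avg(name):
                times = self.history[name]
                return sum(times) / len(times) if times else float("inf")
            best = min(self.renderers, key=avg)
            if avg(best) == float("inf"):
                return self.renderers[self.current]   # no history yet: stay with the current choice
            if best != self.current and self.switches_left > 0:
                self.current = best                   # switch to the historically faster processor
                self.switches_left -= 1
            return self.renderers[self.current]
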
20150301585DATA PROCESSING METHOD, DATA PROCESSING APPARATUS, AND STORAGE MEDIUM - A data processing method is performed by a first processor configured to control a second processor, where the second processor performs a process of creating an image and has a plurality of operation modes with different power consumption levels. The data processing method includes setting a number related to a second period following a first period, based on the number of first images created by the second processor during the first period; and switching the operation mode of the second processor to an operation mode whose power consumption is lower than that of the operation mode used while creating the image, among the plurality of operation modes, when the number of second images created during the second period reaches the set number.10-22-2015
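Numerically, the policy amounts to turning the first period's frame count into a budget for the second period. A sketch with illustrative names (reusing the observed count as the budget is only one possible choice):

    class FrameBudgetController:
        def __init__(self, gpu):
            self.gpu = gpu
            self.target = None
            self.created_this_period = 0

        def end_of_first_period(self, frames_created_in_first_period):
            # The number set for the second period is derived from the first period.
            self.target = frames_created_in_first_period
            self.created_this_period = 0

        def on_image_created(self):
            self.created_this_period += 1
            if self.target is not None and self.created_this_period >= self.target:
                # Switch to an operation mode with lower power consumption than
                # the mode used while creating the image.
                self.gpu.set_mode("low_power")
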
20150301586CONTROL METHOD AND INFORMATION PROCESSING DEVICE - A control method executed by an information processing device including a first processor and a second processor includes specifying a plurality of processes which issued orders for causing the first processor to execute a drawing process for drawing a frame, the plurality of processes being executed by the second processor, first determining whether the drawing process is completed, based on a comparison between the specified plurality of processes and specific processes, and controlling, based on a result of the first determining, a state regarding a power consumption of the first processor until the first processor starts another drawing process for drawing another frame.10-22-2015
20150310580INTELLIGENT GPU MEMORY PRE-FETCHING AND GPU TRANSLATION LOOKASIDE BUFFER MANAGEMENT - A method and apparatus of a device that manages virtual memory for a graphics processing unit is described. In an exemplary embodiment, the device performs translation lookaside buffer coherency for a translation lookaside buffer of the graphics processing unit of the device. In this embodiment, the device, which includes a central processing unit and the graphics processing unit, receives a request to remove an entry of the translation lookaside buffer of the graphics processing unit. The entry includes a translation of a virtual memory address of a process to a physical memory address of system memory of the central processing unit, and the graphics processing unit is executing a compute task of the process. The device locates the entry in the translation lookaside buffer and removes the entry.10-29-2015
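The TLB-coherency step can be modelled with a dictionary keyed by (process id, virtual page); the names below are illustrative, not the device's interface:

    class GpuTlb:
        def __init__(self):
            self.entries = {}   # (pid, virtual_page) -> physical_page

        def insert(self, pid, virtual_page, physical_page):
            self.entries[(pid, virtual_page)] = physical_page

        def remove(self, pid, virtual_page):
            # Locate the entry for the translation being torn down on the CPU side
            # and remove it, so the GPU cannot keep using a stale mapping while it
            # executes a compute task for that process.
            self.entries.pop((pid, virtual_page), None)
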
20150379670DATA DISTRIBUTION FABRIC IN SCALABLE GPUs - In one embodiment, a hybrid fabric interconnects multiple graphics processor cores within a processor. The hybrid fabric interconnect includes multiple data channels, including programmable virtual data channels. The virtual data channels carry multiple traffic classes of packet-based messages. The virtual data channels and multiple traffic classes may be assigned one of multiple priorities. The virtual data channels may be arbitrated independently. The hybrid fabric is scalable and can support multiple topologies, including multiple stacked integrated circuit topologies.12-31-2015
20150379737Virtual Memory Supported Compression Control Surfaces - Data destined for memory, i.e., data that was evicted at some level in the cache hierarchy, is intercepted and subjected to compression before being sent to memory. Thereby, when the compression is successful, the memory bandwidth requirement is reduced, potentially resulting in higher performance and/or energy efficiency in some embodiments.12-31-2015
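A toy version of the intercept-and-compress idea, using zlib as a stand-in for whatever compressor the hardware would use and a hypothetical memory object:

    import zlib

    def write_back(evicted_line, memory):
        compressed = zlib.compress(evicted_line)
        if len(compressed) < len(evicted_line):
            # Compression succeeded: fewer bytes cross the memory interface.
            memory.store(compressed, compressed=True)
        else:
            # Incompressible data: send the original line unchanged.
            memory.store(evicted_line, compressed=False)
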
20160063664DISPLAY DEVICE WITH MULTI-PROCESSOR, CONTROL METHOD FOR THE SAME, AND STORAGE MEDIUM HAVING CONTROL PROGRAM STORED THEREON - A display device including a first processor, a second processor, a frame memory which is overwritten by and stores first image data generated by the first processor and second image data generated by the second processor, and a display on which a predetermined image is displayed based on the first image data and the second image data stored in the frame memory, in which the first processor or the second processor sets the first processor in a power saving state when the first processor is not overwriting the first image data in the frame memory.03-03-2016
20160117793Systems And Methods For Orchestrating External Graphics - Systems and methods that may be implemented to orchestrate external graphics, for example to support and extend switchable graphics capability beyond internal system components of a host information handling system so as to include an external discrete graphics processing unit (xGPU) that is not integrated or embedded within the chassis enclosure of the host information handling system, and that is coupled to the host information handling system from outside the host system chassis enclosure.04-28-2016
20160132280IMAGE TRANSMISSION SYSTEM AND IMAGE TRANSMISSION METHOD - Provided is an image transmission system including an image control device, and at least two signal processing devices. The signal processing devices each include an image receiver configured to selectively receive one or more images transmitted using multicast based on image control information transmitted from the image control device, one or more image processing units configured to perform an image process on an image received by the image receiver based on the image control information, and an image sender configured to transmit an image subjected to the image process by the image processing unit based on the image control information, the image being transmitted using multicast.05-12-2016
20160148337GRAPHICS PROCESSING SYSTEMS - A graphics processing core of a tile-based graphics processing system when processing a tile of a graphics output reads a primitive to be processed off a tile list for the tile being processed, along with an identifier for that primitive. The graphics processing core then checks whether or not the identifier matches the identifier stored for any entry stored in a primitive data cache. A match indicates that primitive-specific data (including line equations, depth equations and barycentric equations) for the primitive to be processed is stored in the cache. If a match is found then the stored primitive-specific data is retrieved and used to process (rasterise and render) the primitive. If no match is found, primitive-specific data is calculated from scratch, stored in the primitive data cache, and used to process the primitive.05-26-2016
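The per-primitive cache lookup can be sketched as a dictionary keyed by the primitive identifier; compute_primitive_data is a hypothetical stand-in for setting up the line, depth, and barycentric equations:

    class PrimitiveDataCache:
        def __init__(self):
            self.store = {}   # primitive identifier -> primitive-specific data

        def get(self, primitive_id, primitive):
            data = self.store.get(primitive_id)
            if data is None:
                # Miss: derive the primitive-specific data from scratch and cache it.
                data = compute_primitive_data(primitive)
                self.store[primitive_id] = data
            return data   # hit or freshly computed: used to rasterise and render the primitive

    def compute_primitive_data(primitive):
        # Placeholder for deriving line equations, depth equations and barycentric equations.
        return {"lines": None, "depth": None, "barycentric": None}
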
20160163014PREDICTION BASED PRIMITIVE SORTING FOR TILE BASED RENDERING - Techniques related to graphics rendering are discussed. Such techniques may include predicting primitive intersection information for tiles of a frame, rendering the frame on a tile-by-tile basis based on the predicted primitive intersection information, and re-rendering any tiles with predicted primitive errors.06-09-2016
20160189681Ordering Mechanism for Offload Graphics Scheduling - Described herein are technologies related to ensuring that graphics commands and graphics context are offloaded and scheduled for consumption as the commands and graphics context are sent from coherent to non-coherent memory/fabric in a "processor to processor" handoff or transaction.06-30-2016
20160203580Memory Sharing Via a Unified Memory Architecture07-14-2016
20170236242Ultra high resolution pan-scan on displays connected across multiple systems/GPUs08-17-2017

