Class / Patent application number | Description | Number of patent applications / Date published |
345503000 | Coprocessor (e.g., graphic accelerator) | 48 |
20080246771 | GRAPHICS PROCESSING SYSTEM AND METHOD - A graphics processing system and method for displaying an image, the system having first and second graphics processors coupled together. A main display device is coupled to the first graphics processor. Secondary display devices are coupled to the second graphics processor. The first and second graphics processors are operable to share processing for displaying the image on the main display, or the first graphics processor is operable to process a main portion of the image for display on the main display and the second graphics processor is operable to process secondary portions of the image for display on the secondary display devices. | 10-09-2008 |
20080291207 | PRESHADERS: OPTIMIZATION OF GPU PRO - A shader program capable of execution on a GPU is analyzed for constant expressions. These constant expressions are replaced with references to registers or memory addresses on the GPU. A preshader is created that comprises two executable files. The first executable file contains the shader program with each constant expression removed and replaced with a unique reference accessible by the GPU. The first file is executable at the GPU. A second file contains the removed constant expressions along with instructions to place the generated values at the associated reference. The second executable file is executable at a CPU. When the preshader is executed, an instance of the first file is executed at the GPU for each vertex or pixel that is displayed. One instance of the second file is executed at the CPU. As the preshader is executed, the constant expressions in the second file are evaluated and the resulting intermediate values are passed to each instance of the first file on the GPU. | 11-27-2008 |
20090066705 | System and method for an information handling system having an external graphics processor system for operating multiple monitors - Methods and systems are disclosed for an information handling system comprising an internal graphics system and an external graphics system, wherein both the internal and external graphics systems may operate simultaneously to support multiple monitors. The internal graphics system may be provided, for example, by a notebook computer. The external graphics system may comprise a pass-through port providing graphics from the internal graphics system to a first monitor simultaneously with a graphics card of the external graphics system supporting a second monitor. The external graphics system can support two monitors as well. An HDTV can be supported instead of one of the monitors supported by the external graphics system. The system that contains internal graphics capabilities may include an Express card socket, wherein an external graphics processor unit of the external graphics system is coupled to the Express card socket. | 03-12-2009 |
20090160866 | METHOD AND SYSTEM FOR VIDEO DECODING BY MEANS OF A GRAPHIC PIPELINE, COMPUTER PROGRAM PRODUCT THEREFOR - A system for decoding a stream of compressed digital video images comprises a graphics accelerator for reading the stream of compressed digital video images, creating, starting from said stream of compressed digital video images, three-dimensional scenes to be rendered, and converting the three-dimensional scenes to be rendered into decoded video images. The graphics accelerator is preferentially configured as a pipeline selectively switchable between operation in a graphics context and operation for decoding the stream of video images. The graphics accelerator is controllable during operation for decoding the stream of compressed digital video images via a set of application programming interfaces comprising, in addition to new APIs, the standard APIs for operation of the graphics accelerator in a graphics context. | 06-25-2009 |
20090213127 | GUIDED ATTACHMENT OF ACCELERATORS TO COMPUTER SYSTEMS - A method of guided attachment of hardware accelerators to slots of a computing system includes dividing a first group of hardware accelerators into a plurality of priority classes, dividing a first group of slots of the computing system into a plurality of hierarchical tiers, and attaching each hardware accelerator of the first group of hardware accelerators to a slot matched to that hardware accelerator based on a comparison of the priority class of the hardware accelerator and the hierarchical tier of the slot. | 08-27-2009 |
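As an illustration of the matching step described in the entry above (and not the patented procedure itself), the sketch below pairs accelerators with slots greedily, best priority class to best tier. The class and tier numbering, the greedy pairing, and all names are assumptions for illustration.

```python
# Illustrative sketch of priority-class-to-tier matching (assumed greedy pairing):
# accelerators in higher priority classes are attached to slots in higher
# (better-connected) hierarchical tiers first.

def attach_accelerators(accelerators, slots):
    """accelerators: list of (name, priority_class); slots: list of (slot_id, tier).
    Lower class/tier numbers are treated as higher priority here (an assumption)."""
    pending = sorted(accelerators, key=lambda a: a[1])   # best priority class first
    free = sorted(slots, key=lambda s: s[1])             # best tier first
    placement = {}
    for name, _cls in pending:
        if not free:
            break                                        # more accelerators than slots
        slot_id, _tier = free.pop(0)
        placement[name] = slot_id
    return placement

if __name__ == "__main__":
    accs = [("crypto", 2), ("gpu0", 1), ("fpga", 3)]
    slots = [("slot-A", 1), ("slot-B", 2), ("slot-C", 3)]
    print(attach_accelerators(accs, slots))  # {'gpu0': 'slot-A', 'crypto': 'slot-B', 'fpga': 'slot-C'}
```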
20090213128 | SYSTEM AND METHOD FOR INSTRUCTION LATENCY REDUCTION IN GRAPHICS PROCESSING - A system, method and apparatus are disclosed, in which an instruction scheduler of a compiler, e.g., a shader compiler, reduces instruction latency based on a determined instruction distance between dependent predecessor and successor instructions. | 08-27-2009 |
20100201695 | SYSTEM AND METHOD FOR OPTIMIZING A GRAPHICS INTENSIVE SOFTWARE PROGRAM FOR THE USER'S GRAPHICS HARDWARE - A system and method for optimizing the performance of a graphics intensive software program for graphics acceleration hardware. This system and method encompasses a procedure that validates the different functions of a | 08-12-2010 |
20110050712 | Extension To A Hypervisor That Utilizes Graphics Hardware On A Host - Graphics rendering in a virtual machine system is accelerated by utilizing host graphics hardware. In one embodiment, the virtual machine system includes a server that hosts a plurality of virtual machines. The server includes one or more graphics processing units. Each graphics processing unit can be allocated to multiple virtual machines to render images. A hypervisor that runs on the server is extended to include a redirection module, which receives a rendering request from a virtual machine and redirects the rendering request to a graphics driver. The graphics driver can command an allocated portion of a graphics processing unit to render an image on the server. | 03-03-2011 |
20110063305 | CO-PROCESSING TECHNIQUES ON HETEROGENEOUS GRAPHICS PROCESSING UNITS - The graphics co-processing technique includes loading and initializing a device driver interface and a device specific kernel mode driver for a graphics processing unit on a primary adapter. A device driver interface and a device specific kernel mode driver for a graphics processing unit on an unattached adapter are also loaded and initialized without the device driver interface talking back to a runtime application programming interface or a thunk layer when a particular version of an operating system will not allow the device driver interface on the unattached adapter to be loaded. | 03-17-2011 |
20110063306 | CO-PROCESSING TECHNIQUES ON HETEROGENEOUS GPUs INCLUDING IDENTIFYING ONE GPU AS A NON-GRAPHICS DEVICE - The graphics co-processing technique includes loading a device specific kernel mode driver of a second graphics processing unit tagged as a non-graphics device. A device driver interface and a device specific kernel mode driver are loaded and initialized for a graphics processing unit on a primary adapter. A device driver interface and a device specific kernel mode driver for a graphics processing unit on the non-graphics tagged adapter are also loaded and initialized without the device driver interface talking back to a runtime application programming interface when a particular version of an operating system would not otherwise allow the device specific kernel mode driver for the second graphics processing unit to be loaded. | 03-17-2011 |
20110063307 | SYSTEM AND METHOD FOR A FAST, PROGRAMMABLE PACKET PROCESSING SYSTEM - The present invention provides a cost effective method to improve the performance of communication appliances by retargeting the graphics processing unit as a coprocessor to accelerate networking operations. A system and method is disclosed for using a coprocessor on a standard personal computer to accelerate packet processing operations common to network appliances. The appliances include but are not limited to routers, switches, load balancers and Unified Threat Management appliances. More specifically, the method uses common advanced graphics processor engines to accelerate the packet processing tasks. | 03-17-2011 |
20110115801 | PERSONAL ELECTRONIC DEVICE WITH DISPLAY SWITCHING - A novel personal electronic device includes a first (embedded) and second (non-embedded) processors including associated operating systems and functions. In one aspect, the first processor performs relatively limited functions, while the second processor performs relatively broader functions under control of the first processor. Often the second processor requires more power than the first processor and is selectively operated by the first processor to minimize overall power consumption. Protocols for functions to be performed by the second processor may be provided directly to the second processor and processed by the second processor. In another aspect, a display controller is designed to interface with both processors. In another aspect, the operating systems work with one another. In another aspect, the first processor employs a thermal control program. Advantages of the invention include a broad array of functions performed by a relatively small personal electronic device. | 05-19-2011 |
20110128293 | System Co-Processor - Embodiments of the invention provide for assigning two different class identifiers to a device to allow it to be loaded by an operating system as different devices. The device may be a graphics device. The graphics device may be integrated in various configurations, including but not limited to a central processing unit, a chipset, and so forth. The processor or chipset may be associated with a first identifier associated with a graphics processor and a second device identifier that enables the processor or chipset as a co-processor. | 06-02-2011 |
20110157191 | METHOD AND SYSTEM FOR ARTIFICIALLY AND DYNAMICALLY LIMITING THE FRAMERATE OF A GRAPHICS PROCESSING UNIT - Embodiments of the present invention are directed to providing a method and system for applying automatic power conservation techniques in a computing system. Embodiments are described herein that automatically limit the frame rate of an application executing in a discrete graphics processing unit operating off a battery or other such exhaustible power source. By automatically limiting the frame rate in certain detected circumstances, the rate of power consumption is reduced and, thus, the life of the current charge stored in a battery may be dramatically extended. Another embodiment is also provided that allows for the more effective application of automatic power conservation techniques during detected periods of inactivity by applying a low power state immediately after a last packet of a frame is rendered and displayed. | 06-30-2011 |
20110164046 | POLICY-BASED SWITCHING BETWEEN GRAPHICS-PROCESSING UNITS - The disclosed embodiments provide a system that configures a computer system to switch between graphics-processing units (GPUs). In one embodiment, the system drives a display using a first graphics-processing unit (GPU) in the computer system. Next, the system detects one or more events associated with one or more dependencies on a second GPU in the computer system. Finally, in response to the event, the system prepares to switch from the first GPU to the second GPU as a signal source for driving the display. | 07-07-2011 |
20110210975 | MULTI-SCREEN SIGNAL PROCESSING DEVICE AND MULTI-SCREEN SYSTEM - A multi-screen signal processing device includes a main graphics processor and a plurality of sub-graphics processors. The main graphics processor is electrically connected to each of the plurality of sub-graphics processors. The main graphics processor is used for receiving external image data, and is capable of decoding the external image data and outputting frame data. Each sub-graphics processor synchronously captures a part of the frame data and outputs a broadcasting signal. The multi-screen signal processing device may be connected to multiple screens to play multiple images at the same time. Moreover, performing the decoding step with a single graphics processor enables easy synchronization of frames displayed on different screens and saves the energy consumed by repeating the decoding step. | 09-01-2011 |
20110261062 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus that includes a first graphic processing module having a first level of graphic performance and a second graphic processing module having a second level of graphic performance, which is greater than the first level of graphic performance. The information processing apparatus also includes a controller that selects one of the first graphic processing module or the second graphic processing module by determining whether the information processing apparatus is capable of outputting data with the first level of graphic performance or the second level of graphic performance, and detects whether the information processing apparatus is provided with power via a battery or via an external power source. | 10-27-2011 |
20120001926 | Intermediate Language Accelerator Chip - An accelerator chip can be positioned between a processor chip and a memory. The accelerator chip enhances the operation of a Java program by running portions of the Java program for the processor chip. In a preferred embodiment, the accelerator chip includes a hardware translator unit and a dedicated execution engine. | 01-05-2012 |
20120032965 | Intermediate language accelerator chip - An accelerator chip can be positioned between a processor chip and a memory. The accelerator chip enhances the operation of a Java program by running portions of the Java program for the processor chip. In a preferred embodiment, the accelerator chip includes a hardware translator unit and a dedicated execution engine. | 02-09-2012 |
20120188259 | Mechanisms for Enabling Task Scheduling - Embodiments described herein provide a method including receiving a command to schedule a first process and selecting a command queue associated with the first process. The method also includes scheduling the first process to run on an accelerated processing device and preempting a second process running on the accelerated processing device to allow the first process to run on the accelerated processing device. | 07-26-2012 |
20120194525 | Managed Task Scheduling on a Graphics Processing Device (APD) - Provided herein is a method including receiving a run list including one or more processes to run on an accelerated processing device, wherein each of the one or more processes is associated with a corresponding independent job command queue. The method also includes scheduling each of the one or more processes to run on the accelerated processing device based on a criteria associated with each process. | 08-02-2012 |
20120194526 | Task Scheduling - Systems, methods, and articles of manufacture for optimizing task scheduling on an accelerated processing device (APD) device are provided. In an embodiment, a method comprises: enqueuing, using the APD, one or more tasks in a memory storage; and dequeuing, using the APD, the one or more tasks from the memory storage using a hardware-based command processor, wherein the command processor forwards the one or more tasks to a shader core. | 08-02-2012 |
20120194527 | Method for Preempting Graphics Tasks to Accommodate Compute Tasks in an Accelerated Processing Device (APD) - Embodiments described herein provide a method of arbitrating a processing resource. The method includes receiving a command to preempt a task and preventing additional wavefronts associated with the task from being processed. The method also includes evicting currently executing wavefronts associated with the task, based upon predetermined criteria. | 08-02-2012 |
20120200579 | Process Device Context Switching - Methods, systems, and computer readable media are disclosed for preemptive context-switching of processes running on an accelerated processing device. A method includes, responsive to an exception upon access to a memory by a process running on an accelerated processing device, determining whether to preempt the process based on the exception, and preempting, based upon the determining, the process from running on the accelerated processing device. | 08-09-2012 |
20120206463 | Method and Apparatus for Dispatching Graphics Operations to Multiple Processing Resources - Parallel graphics-processing methods and mobile computing apparatus with parallel graphics-processing capabilities are disclosed. One exemplary embodiment of a mobile computing apparatus includes physical memory, at least two distinct graphics-processing devices, and a bus coupled to the physical memory and the at least two graphics-processing devices. A virtual graphics processing component enables each of at least two graphics-processing operations to be executed, in parallel, by a corresponding one of the at least two distinct graphics-processing devices, which operate on the same memory surface at the same time. | 08-16-2012 |
20120256929 | EXPANDABLE MULTI-CORE TELECOMMUNICATION AND VIDEO PROCESSING APPARATUS - An expandable multi-core telecommunication and video processing apparatus includes a primary wireless telecommunications device having a microprocessor that can be programmed to run a wide range of software applications and includes a primary, or main, viewer touch screen interface and a plurality of ports for receiving one or more video core processors. Each video core processor is removably connectable to a port located along a surface of the primary telecommunications device, permitting a plurality of individual videos that can be interfaced by a user. The individual videos displayed by each connected video core processor can act in concert with, or independently of, the main, or primary, touch screen interface located on a front surface of the primary telecommunications device. The primary telecommunications device further includes a detachable storage bay for retaining video core processors when they are not connected to the primary telecommunications device. The primary telecommunications device and the video core processors are each preferably connectable to a docking station, which can download data from either a video core processor or the primary telecommunications device. The docking station can be connected to a personal computer. | 10-11-2012 |
20130083042 | GPU SELF THROTTLING - Techniques for GPU self throttling are described. In one or more embodiments, timing information for GPU frame processing is obtained using a timeline for the GPU. This may occur by inserting callbacks into the GPU processing timeline. An elapsed time for unpredictable work that is inserted into the GPU workload is determined based on the obtained timing information. A decision is then made regarding whether to “throttle” designated optional/non-critical portions of the work for a frame based on the amount of elapsed time. In one approach the elapsed time is compared to a configurable timing threshold. If the elapsed time exceeds the threshold, work is throttled by performing light or no processing for one or more optional portions of a frame. If the elapsed time is less than the threshold, heavy processing (e.g., “normal” work) is performed for the frame. | 04-04-2013 |
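A minimal sketch of the throttling decision described in 20130083042, under stated assumptions: a plain wall-clock timer stands in for the GPU timeline callbacks, and THRESHOLD_MS, render_required and render_optional are hypothetical names, not part of any real runtime.

```python
# Minimal sketch of the elapsed-time throttling decision: if the unpredictable
# work ate too much of the frame budget, skip the optional/non-critical work.
import time

THRESHOLD_MS = 4.0  # assumed configurable timing threshold

def frame_work(do_unpredictable_work, render_required, render_optional):
    start = time.perf_counter()
    do_unpredictable_work()                       # work whose cost varies frame to frame
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    render_required()                             # critical portions always run
    if elapsed_ms > THRESHOLD_MS:
        return "throttled"                        # perform light or no optional processing
    render_optional()                             # budget remains: do the "normal" work
    return "full"
```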
20130147816 | Partitioning Resources of a Processor - Embodiments described herein provide an apparatus, a computer readable medium and a method for simultaneously processing tasks within an APD. The method includes processing a first task within an APD. The method also includes reducing utilization of the APD by the first task to facilitate simultaneous processing of a second task, such that the utilization remains below a threshold. | 06-13-2013 |
20130181998 | PARA-VIRTUALIZED HIGH-PERFORMANCE COMPUTING AND GDI ACCELERATION - The present invention extends to methods, systems, and computer program products for para-virtualized GPGPU computation and GDI acceleration. Some embodiments provide a compute shader to a guest application within a para-virtualized environment. A vGPU in a child partition presents compute shader DDIs for performing GPGPU computations to a guest application. A render component in a root partition receives compute shader commands from the vGPU and schedules the commands for execution at the physical GPU. Other embodiments provide GPU-accelerated GDI rendering capabilities to a guest application within a para-virtualized environment. A vGPU in a child partition provides an API for receiving GDI commands, and sends GDI commands and data to a render component in a root partition. The render component schedules the GDI commands on a 3D rendering device. The 3D rendering device executes the GDI commands at the physical GPU using a sharable GDI surface. | 07-18-2013 |
20140002466 | METHOD AND DEVICE FOR EFFICIENT PARALLEL MESSAGE COMPUTATION FOR MAP INFERENCE | 01-02-2014 |
20140078155 | GRAPHICS ACCELERATOR - Disclosed herein are various embodiments of a graphics accelerator, which may include an integrated circuit. The integrated circuit may include a local memory; a direct memory access (DMA) engine; a processor; and one or more processing pipelines. The local memory stores graphics data that includes a plurality of pixels. The DMA engine transfers the graphics data between the local memory and an external memory. The processor performs at least one operation, in parallel, on components of at least a portion of the pixels. The one or more processing pipelines process the graphics data. The graphics accelerator works on operands and produces outputs for one set of pixels while the DMA engine is bringing in operands for a future set of pixel operations, and transfers data from the external memory to the one or more processing pipelines by directing data to the one or more pipelines. | 03-20-2014 |
20140184615 | Sequential Rendering For Field-Sequential Color Displays - The specification and drawings present a new method, apparatus and software related product (e.g., a computer readable memory) for sequential rendering (including hardware acceleration) of each primary color of a plurality of primary colors in each frame of an image separately in a space-time domain for displaying on field-sequential color (FSC) displays. Instead of rendering whole pixels, various embodiments provide rendering of each primary color plane separately in the space-time domain, and serializing/sequencing the colors of the rendered data directly to the bus that connects a host (an operator device) and the FSC display. Generally the number of primary colors may be two or more. When displayed on an FSC display, motion quality may be largely improved. | 07-03-2014 |
20140204099 | DIRECT LINK SYNCHRONIZATION COMMUNICATION BETWEEN CO-PROCESSORS - Systems, apparatus, articles, and methods are described including operations to communicate synchronization notifications between a co-processor graphic data producer and a co-processor graphic data consumer via a direct link without passing such communications through the central processing unit. | 07-24-2014 |
20140267316 | DISPLAY CO-PROCESSING - In embodiments of display co-processing, a computing device includes a display, a full-power processor, and a low-power processor that can alter visual content presented by the display without utilizing the full-power processor. The low-power processor can, responsive to a request from the full-power processor, generate additional display data to update display data stored in a frame-buffer of the display. The low-power processor can then transmit the additional display data to the frame-buffer effective to alter at least a portion of the visual content presented by the display. In some embodiments, the additional display data is transmitted via a protocol converter that forwards the display data to the display using a display-specific communication protocol. | 09-18-2014 |
20140267317 | MULTIMEDIA SYSTEM AND OPERATING METHOD OF THE SAME - A multimedia system includes a main special function register (SFR) configured to store SFR information; a plurality of processing modules each configured to process frames of data based on the SFR information; and a system control logic configured to control operations of the main SFR and the plurality of processing modules. The plurality of processing modules may process data of different frames during the same time period. | 09-18-2014 |
20140313209 | SELECTIVE HARDWARE ACCELERATION IN VIDEO PLAYBACK SYSTEMS - Embodiments of a system and method for enhanced video performance in a video playback system are generally described herein. In some embodiments, a video frame from a video element in a web page, which is to be presented in a web browser and is unobscured by any other elements associated with the web page, the web browser, or a user interface, is directly rendered by a hardware decoder and composited with any associated web content or other elements directly to a video playback display device. When a video frame from the video element is obscured by another element, the video frame is rendered by a processor in the video playback display device in order to incorporate the non-video graphics element on the video playback device. | 10-23-2014 |
20140333633 | APPARATUSES AND METHODS FOR POLICY AWARENESS IN HARDWARE ACCELERATED VIDEO SYSTEMS - Apparatuses and methods for prioritizing the allocation of video acceleration hardware in video playback systems. Smooth, high-definition video playback may be provided in a system that may include multiple web browsers, web browser tabs, video players, or media players by allocating hardware acceleration resources to individual videos based on each video's priority in a predefined priority configuration. Priority of an individual video may be based on the visibility of the video to a system user or the order in which a video was requested by a user. Videos that are visible on a display screen may have a higher priority than videos that are hidden or obscured. Videos that are started or opened more recently than other videos may have a higher priority. A priority management unit may coordinate the allocation of video playback acceleration resources dynamically as the priority ranking of videos changes in response to user input. | 11-13-2014 |
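The ranking policy described in 20140333633 can be sketched roughly as follows; the dictionary fields, the fixed decoder pool size, and the tie-breaking by start time are illustrative assumptions rather than the patented priority configuration.

```python
# Sketch of a visibility-then-recency priority ranking for a limited pool of
# hardware decode sessions; everything else falls back to software decode.
def allocate_hw_decoders(videos, num_hw_decoders):
    """videos: list of dicts with 'id', 'visible' (bool), 'started_at' (float).
    Returns the set of video ids that get a hardware decode session."""
    ranked = sorted(videos, key=lambda v: (v["visible"], v["started_at"]), reverse=True)
    return {v["id"] for v in ranked[:num_hw_decoders]}

if __name__ == "__main__":
    videos = [
        {"id": "tab1", "visible": True,  "started_at": 10.0},
        {"id": "tab2", "visible": False, "started_at": 12.0},
        {"id": "tab3", "visible": True,  "started_at": 11.0},
    ]
    # The visible videos (tab1, tab3) win over the hidden but newer tab2.
    print(allocate_hw_decoders(videos, num_hw_decoders=2))
```

Re-running the allocation whenever visibility or playback state changes gives the dynamic re-prioritization the abstract describes.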
20140333634 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus and an image processing method are provided. Each of the image processing apparatus and the image processing method sets, to a first setting data storage area, an address enabling access, for each one of the sets of application software, to an address window in memory space accessible by a first processor; sets, to a second setting data storage area, an address of an address window in memory space of a second memory, for each one of the sets of application software; and transfers image data drawn on a first memory to the second memory via an address window specified by the application software, where the first setting data storage area and the second setting data storage area are included in a second processor and provided for each one of the sets of application software. | 11-13-2014 |
20150097843 | SYSTEMS AND METHODS FOR PROVIDING PRE-OPERATING SYSTEM AND POST-OPERATING SYSTEM REMOTE MANAGEMENT OF INFORMATION HANDLING SYSTEM - A method may include during a pre-operating system environment writing user graphics data to a discrete graphics controller and an embedded graphics controller of a service processor integral to the information handling system and storing user graphics data written to the embedded graphics controller in a frame buffer such that a remote management information handling system remotely coupled to the information handling system via the service processor may receive user graphics data from the frame buffer. The method may also include during a post-operating system environment establishing a remote management connection between the service processor and a host processor of the information handling system via an internal network, communicating datagrams from the host processor to the embedded processor, wherein the datagrams comprise a payload including post-operating system user graphics data, and communicating the post-operating system user graphics data from the service processor to the remote management information handling system. | 04-09-2015 |
20150116334 | SELECTIVE UTILIZATION OF GRAPHICS PROCESSING UNIT (GPU) BASED ACCELERATION IN DATABASE MANAGEMENT - A method for the selective utilization of graphics processing unit (GPU) acceleration of database queries in database management is provided. The method includes receiving a database query in a database management system executing in memory of a host computing system. The method also includes estimating a time to complete processing of one or more operations of the database query using GPU accelerated computing in a GPU and also a time to complete processing of the operations using central processor unit (CPU) sequential computing of a CPU. Finally, the method includes routing the operations for processing using GPU accelerated computing if the estimated time to complete processing of the operations using GPU accelerated computing is less than an estimated time to complete processing of the operations using CPU sequential computing, but otherwise routing the operations for processing using CPU sequential computing. | 04-30-2015 |
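A rough sketch of the cost-based routing rule in 20150116334, assuming hypothetical per-operation cost estimators rather than a real query planner or GPU runtime:

```python
# Sketch of GPU-vs-CPU routing: estimate completion time on each path and
# send the operations to whichever path is cheaper.
def route_operations(ops, estimate_gpu_ms, estimate_cpu_ms):
    """Return ('gpu', ops) or ('cpu', ops) based on the lower estimated time."""
    gpu_ms = sum(estimate_gpu_ms(op) for op in ops)
    cpu_ms = sum(estimate_cpu_ms(op) for op in ops)
    return ("gpu", ops) if gpu_ms < cpu_ms else ("cpu", ops)

if __name__ == "__main__":
    ops = ["scan", "hash_join", "aggregate"]
    # Toy cost models (assumed numbers): the GPU pays a fixed transfer overhead
    # per operation but processes each operation faster than the CPU.
    gpu_cost = lambda op: 5.0 + {"scan": 1.0, "hash_join": 2.0, "aggregate": 0.5}[op]
    cpu_cost = lambda op: {"scan": 8.0, "hash_join": 12.0, "aggregate": 3.0}[op]
    print(route_operations(ops, gpu_cost, cpu_cost))  # ('gpu', [...]) with these numbers
```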
20150325033 | Efficient Inter-processor Communication in Ray Tracing - A novel method and system for distributed database ray-tracing is presented, based on modular mapping of scene data among processors. Its inherent properties include matching geographical proximity in the scene with communication proximity between processors. | 11-12-2015 |
20150348226 | SELECTIVE GPU THROTTLING - A method and apparatus of a device that manages a thermal profile of a device by selectively throttling graphics processing unit operations of the device is described. In an exemplary embodiment, the device monitors the thermal profile of the device, where the device executes a plurality of processes that utilizes a graphics processing unit of the device. In addition, the plurality of processes include a high priority process and a low priority process. If the thermal profile of the device exceeds a thermal threshold, the device decreases a first GPU utilization for the low priority process and maintains a second GPU utilization for the high priority process. The device further executes the low priority process using the first GPU utilization with the GPU and executes the high priority process using the second GPU utilization with the GPU. | 12-03-2015 |
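The priority-aware throttling rule in 20150348226 can be sketched as below; the thermal threshold, the 0.3 utilization cap, and the per-process cap representation are assumptions for illustration only, not the device's actual thermal management.

```python
# Sketch of thermal-profile-driven GPU throttling: when the device is over its
# thermal threshold, cap GPU utilization for low-priority processes while
# maintaining utilization for high-priority processes.
THERMAL_THRESHOLD_C = 80.0   # assumed threshold

def gpu_utilization_caps(temperature_c, processes):
    """processes: list of (name, priority) with priority 'high' or 'low'.
    Returns a dict of per-process GPU utilization caps in [0.0, 1.0]."""
    caps = {}
    over_threshold = temperature_c > THERMAL_THRESHOLD_C
    for name, priority in processes:
        if over_threshold and priority == "low":
            caps[name] = 0.3      # decrease utilization for low-priority work (assumed value)
        else:
            caps[name] = 1.0      # maintain utilization for high-priority work
    return caps

if __name__ == "__main__":
    procs = [("window_server", "high"), ("background_encoder", "low")]
    print(gpu_utilization_caps(85.0, procs))  # only the low-priority process is capped
```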
20150348227 | DRAWING DATA GENERATION DEVICE, DRAWING DATA GENERATION METHOD AND DISPLAY DEVICE - At the time each of the components is to be drawn according to the drawing orders determined by a drawing-order determination unit | 12-03-2015 |
20150348228 | CLOSED LOOP CPU PERFORMANCE CONTROL - The invention provides a technique for targeted scaling of the voltage and/or frequency of a processor included in a computing device. One embodiment involves scaling the voltage/frequency of the processor based on the number of frames per second being input to a frame buffer in order to reduce or eliminate choppiness in animations shown on a display of the computing device. Another embodiment of the invention involves scaling the voltage/frequency of the processor based on a utilization rate of the GPU in order to reduce or eliminate any bottleneck caused by slow issuance of instructions from the CPU to the GPU. Yet another embodiment of the invention involves scaling the voltage/frequency of the CPU based on specific types of instructions being executed by the CPU. Further embodiments include scaling the voltage and/or frequency of a CPU when the CPU executes workloads that have characteristics of traditional desktop/laptop computer applications. | 12-03-2015 |
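One of the control inputs described in 20150348228, the number of frames per second reaching the frame buffer, lends itself to a small sketch; the performance-state ladder, target rate, and margin below are assumed values, not the patented controller.

```python
# Sketch of a closed-loop frequency decision driven by frame-buffer FPS:
# scale the CPU up when frames arrive too slowly (choppy animation),
# scale back down once the target rate is met.
P_STATES_MHZ = [600, 1200, 1800, 2400]   # hypothetical frequency ladder

def next_p_state(current_index, measured_fps, target_fps=60.0, margin=0.9):
    if measured_fps < target_fps * margin:
        return min(current_index + 1, len(P_STATES_MHZ) - 1)   # scale up to fight choppiness
    if measured_fps >= target_fps:
        return max(current_index - 1, 0)                       # scale down to save power
    return current_index                                       # inside the dead band: hold

if __name__ == "__main__":
    idx = 1
    for fps in (42.0, 48.0, 61.0):
        idx = next_p_state(idx, fps)
        print(fps, "->", P_STATES_MHZ[idx], "MHz")
```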
20160110841 | SCREEN PROVISION APPARATUS, SCREEN PROVISION SYSTEM, CONTROL METHOD AND STORAGE MEDIUM - A screen provision apparatus receives, from a client device, account identification information corresponding to the client device, renders a screen corresponding to that identification information, and transmits the screen to the client device. Also, in a case where the screen provision apparatus determines that management information managed in association with the identification information satisfies a predetermined condition, it obtains rendering grade information that defines the content of screen rendering processing. Then, the screen provision apparatus modifies the content of the screen rendering processing in accordance with the rendering grade information. | 04-21-2016 |
20160125566 | SYSTEM AND METHOD FOR PROCESSING LARGE-SCALE GRAPHS USING GPUs - The present invention relates to a system and method for processing a large-scale graph using GPUs, and more particularly, to a system and method capable of processing large-scale graph data beyond the capacity of the device memory of GPUs using a streaming method. A large-scale graph processing system using GPUs according to an aspect of the present invention includes a main memory; device memories of a plurality of GPUs that process graph data transferred from the main memory; a loop controller that processes the graph data transfer between the main memory and the device memory of the GPU in a nested loop join scheme; and a streaming controller that copies the graph data to the device memory of the GPU in a chunk or streaming manner using a GPU stream according to the nested loop join scheme. | 05-05-2016 |
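A much-simplified sketch of the chunk-wise streaming idea in 20160125566; the actual nested loop join over topology and attribute data and the CUDA stream plumbing are not reproduced, and the chunk size, round-robin GPU assignment, and stubbed process_chunk_on_gpu callback are assumptions.

```python
# Sketch of streaming a graph larger than device memory through GPUs one
# chunk at a time; device copies and kernels are stubbed with a callback.
def process_large_graph(edges, chunk_edges, num_gpus, process_chunk_on_gpu):
    """edges: full edge list in main memory (larger than any device memory).
    Each chunk (sized to fit device memory) is copied to a GPU and processed."""
    results = []
    for start in range(0, len(edges), chunk_edges):          # loop over chunks
        chunk = edges[start:start + chunk_edges]
        gpu_id = (start // chunk_edges) % num_gpus            # round-robin GPU choice (assumed)
        results.append(process_chunk_on_gpu(gpu_id, chunk))   # "copy + kernel" stand-in
    return results

if __name__ == "__main__":
    edges = [(u, u + 1) for u in range(10)]
    out = process_large_graph(
        edges, chunk_edges=4, num_gpus=2,
        process_chunk_on_gpu=lambda gpu, chunk: (gpu, len(chunk)))
    print(out)   # [(0, 4), (1, 4), (0, 2)]
```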
20160180491 | DISPLAY SYSTEM HAVING TWO SYSTEMS WHICH OPERATE ONE AT A TIME | 06-23-2016 |
20160253775 | METHODS AND SYSTEMS FOR DESIGNING CORRELATION FILTER | 09-01-2016 |