Cuadra
Allen C. Cuadra, Miami, FL US
Patent application number | Description | Published |
---|---|---|
20140209723 | Comminuting apparatus - A device for comminuting media materials that store sensitive information. The device is a rotating mill core with removable flat-edged blades, a set of stacked bed knives and a screen. The mill core rotates in close proximity to the bed knives to shear the material being fed, and in close proximity to the screen in order to grate the material. A flywheel is attached to the mill core in order to conserve rotational energy and improve the efficiency of the device. A knurled roller installed in the device feeds the material into the knife mill and holds the material steady during the initial shear. Vacuum ports facilitate cleanup. | 07-31-2014 |
Daniel E. Cuadra, Los Angeles, CA US
Patent application number | Description | Published |
---|---|---|
20150213993 | CONTINUOUS CONTACT X-RAY SOURCE - An x-ray device utilizes a band of material to exchange charge through tribocharging within a chamber maintained at low fluid pressure. The charge is utilized to generate x-rays within the housing, which may pass through a window of the housing. Various contact rods may be used as part of the tribocharging process. | 07-30-2015 |
Daniel E. Cuadra, Marina Del Rey, CA US
Patent application number | Description | Published |
---|---|---|
20140270085 | CONTINUOUS CONTACT X-RAY SOURCE - An x-ray device utilizes a band of material to exchange charge through tribocharging within a chamber maintained at low fluid pressure. The charge is utilized to generate x-rays within the housing, which may pass through a window of the housing. Various contact rods may be used as part of the tribocharging process. | 09-18-2014 |
Dean Arthur Cuadra, Santa Monica, CA US
Patent application number | Description | Published |
---|---|---|
20100302638 | Repositionable lens cover - A selectively positionable lens cover is disclosed comprising a lens housing volume and an arcuate lip portion for the selective engagement to and disengagement from a lens. The lens cover further comprises a first engagement structure whereby the first engagement structure may be releasably engaged with and removed from a second engagement structure. | 12-02-2010 |
Jason Cuadra, San Jose, CA US
Patent application number | Description | Published |
---|---|---|
20120223799 | TRANSVERSE SHROUD AND BOBBIN ASSEMBLY - A transformer assembly includes a vertical bobbin and a shrouding element. The vertical bobbin includes a first winding portion, a second winding portion, and a flange. The flange is disposed between the first winding portion and the second winding portion. The flange includes a flange edge. The shrouding element, which substantially covers the first winding portion or the second winding portion, includes a shrouding edge that is operatively coupled to the flange edge. The flange edge and the shrouding edge have at least one complementary corrugation. In one example, the flange edge includes at least one groove and the shrouding edge includes at least one protrusion that is complementary to the groove. In another example, the shrouding edge includes at least one groove and the flange edge includes at least one protrusion that is complementary to the groove. | 09-06-2012 |
Philip Cuadra, San Francisco, CA US
Patent application number | Description | Published |
---|---|---|
20130162661 | SYSTEM AND METHOD FOR LONG RUNNING COMPUTE USING BUFFERS AS TIMESLICES - A system and method for using command buffers as timeslices or periods of execution for a long running compute task on a graphics processor. Embodiments of the present invention allow execution of long running compute applications with operating systems that manage and schedule graphics processing unit (GPU) resources and that may have a predetermined execution time limit for each command buffer. The method includes receiving a request from an application and determining a plurality of command buffers required to execute the request. Each of the plurality of command buffers may correspond to some portion of execution time or timeslice. The method further includes sending the plurality of command buffers to an operating system operable for scheduling the plurality of command buffers for execution on a graphics processor. The command buffers from a different request are time multiplexed within the execution of the plurality of command buffers on the graphics processor. | 06-27-2013 |
20140259016 | SYSTEM AND METHOD FOR RUNTIME SCHEDULING OF GPU TASKS - A method for scheduling work for processing by a GPU is disclosed. The method includes accessing a work completion data structure and accessing a work tracking data structure. Dependency logic analysis is then performed using work completion data and work tracking data. Work items that have dependencies are then launched into the GPU by using a software work item launch interface. | 09-11-2014 |
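The runtime-scheduling abstract above (20140259016) describes launching work items into the GPU only after dependency analysis over a work completion data structure and a work tracking data structure. A minimal sketch of that flow, with invented names (`WorkItem`, `Scheduler`, the `launched` list standing in for the software work item launch interface) that are illustrative assumptions rather than the patented implementation:

```python
# Hypothetical sketch of dependency-checked work launch; not the claimed design.
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    deps: tuple = ()              # names of work items this item waits on

class Scheduler:
    def __init__(self):
        self.completed = set()    # stands in for the "work completion data structure"
        self.pending = []         # stands in for the "work tracking data structure"
        self.launched = []        # stands in for the software launch interface

    def submit(self, item):
        self.pending.append(item)
        self._drain()

    def mark_complete(self, name):
        self.completed.add(name)
        self._drain()

    def _drain(self):
        # Launch every pending item whose dependencies have all completed.
        ready = [w for w in self.pending if set(w.deps) <= self.completed]
        for w in ready:
            self.pending.remove(w)
            self.launched.append(w.name)

sched = Scheduler()
sched.submit(WorkItem("copy"))
sched.submit(WorkItem("kernel", deps=("copy",)))
# "copy" launches immediately; "kernel" waits until "copy" is marked complete.
```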
Philip Alexander Cuadra, Mountain View, CA US
Patent application number | Description | Published |
---|---|---|
20130117758 | COMPUTE WORK DISTRIBUTION REFERENCE COUNTERS - One embodiment of the present invention sets forth a technique for managing the allocation and release of resources during multi-threaded program execution. Programmable reference counters are initialized to values that limit the amount of resources for allocation to tasks that share the same reference counter. Resource parameters are specified for each task to define the amount of resources allocated for consumption by each array of execution threads that is launched to execute the task. The resource parameters also specify the behavior of the array for acquiring and releasing resources. Finally, during execution of each thread in the array, an exit instruction may be configured to override the release of the resources that were allocated to the array. The resources may then be retained for use by a child task that is generated during execution of a thread. | 05-09-2013 |
20130117760 | Software-Assisted Instruction Level Execution Preemption - One embodiment of the present invention sets forth a technique for instruction level execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline. No new instructions are issued and the context state is unloaded from the processing pipeline. Any in-flight instructions that follow the preemption command in the processing pipeline are captured and stored in a processing task buffer to be reissued when the preempted program is resumed. The processing task buffer is designated as a high priority task to ensure the preempted instructions are reissued before any new instructions for the preempted context when execution of the preempted context is restored. | 05-09-2013 |
20130124838 | INSTRUCTION LEVEL EXECUTION PREEMPTION - One embodiment of the present invention sets forth a technique for instruction level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline. No new instructions are issued and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity. | 05-16-2013 |
Philip Alexander Cuadra, San Francisco, CA US
Patent application number | Description | Published |
---|---|---|
20130160021 | SIGNALING, ORDERING, AND EXECUTION OF DYNAMICALLY GENERATED TASKS IN A PROCESSING SYSTEM - One embodiment of the present invention sets forth a technique for enabling the insertion of generated tasks into a scheduling pipeline of a multiple processor system, allowing a compute task that is being executed to dynamically generate a dynamic task and notify a scheduling unit of the multiple processor system without intervention by a CPU. A reflected notification signal is generated in response to a write request when data for the dynamic task is written to a queue. Additional reflected notification signals are generated for other events that occur during execution of a compute task, e.g., to invalidate cache entries storing data for the compute task and to enable scheduling of another compute task. | 06-20-2013 |
20130187935 | LOW LATENCY CONCURRENT COMPUTATION - One embodiment of the present invention sets forth a technique for performing low latency computation on a parallel processing subsystem. A low latency functional node is exposed to an operating system. The low latency functional node and a generic functional node are configured to target the same underlying processor resource within the parallel processing subsystem. The operating system stores low latency tasks generated by a user application within a low latency command buffer associated with the low latency functional node. The parallel processing subsystem advantageously executes tasks from the low latency command buffer prior to completing execution of tasks in the generic command buffer, thereby reducing completion latency for the low latency tasks. | 07-25-2013 |
20130198760 | AUTOMATIC DEPENDENT TASK LAUNCH - One embodiment of the present invention sets forth a technique for automatic launching of a dependent task when execution of a first task completes. Automatically launching the dependent task reduces the latency incurred during the transition from the first task to the dependent task. Information associated with the dependent task is encoded as part of the metadata for the first task. When execution of the first task completes a task scheduling unit is notified and the dependent task is launched without requiring any release or acquisition of a semaphore. The information associated with the dependent task includes an enable flag and a pointer to the dependent task. Once the dependent task is launched, the first task is marked as complete so that memory storing the metadata for the first task may be reused to store metadata for a new task. | 08-01-2013 |
20130268942 | METHODS AND APPARATUS FOR AUTO-THROTTLING ENCAPSULATED COMPUTE TASKS - Systems and methods for auto-throttling encapsulated compute tasks. A device driver may configure a parallel processor to execute compute tasks in a number of discrete throttled modes. The device driver may also allocate memory to a plurality of different processing units in a non-throttled mode. The device driver may also allocate memory to a subset of the plurality of processing units in each of the throttled modes. Data structures defined for each task include a flag that instructs the processing unit whether the task may be executed in the non-throttled mode or in the throttled mode. A work distribution unit monitors each of the tasks scheduled to run on the plurality of processing units and determines whether the processor should be configured to run in the throttled mode or in the non-throttled mode. | 10-10-2013 |
20130298133 | TECHNIQUE FOR COMPUTATIONAL NESTED PARALLELISM - One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions without the additional complexity of CPU involvement. | 11-07-2013 |
20140165072 | TECHNIQUE FOR SAVING AND RESTORING THREAD GROUP OPERATING STATE - A streaming multiprocessor (SM) included within a parallel processing unit (PPU) is configured to suspend a thread group executing on the SM and to save the operating state of the suspended thread group. A load-store unit (LSU) within the SM re-maps local memory associated with the thread group to a location in global memory. Subsequently, the SM may re-launch the suspended thread group. The LSU may then perform local memory access operations on behalf of the re-launched thread group with the re-mapped local memory that resides in global memory. | 06-12-2014 |
20140189329 | COOPERATIVE THREAD ARRAY GRANULARITY CONTEXT SWITCH DURING TRAP HANDLING - Techniques are provided for handling a trap encountered in a thread that is part of a thread array that is being executed in a plurality of execution units. In these techniques, a data structure with an identifier associated with the thread is updated to indicate that the trap occurred during the execution of the thread array. Also in these techniques, the execution units execute a trap handling routine that includes a context switch. The execution units perform this context switch for at least one of the execution units as part of the trap handling routine while allowing the remaining execution units to exit the trap handling routine before the context switch. One advantage of the disclosed techniques is that the trap handling routine operates efficiently in parallel processors. | 07-03-2014 |
20140337569 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR LOW LATENCY SCHEDULING AND LAUNCH OF MEMORY DEFINED TASKS - A system, method, and computer program product for low-latency scheduling and launch of memory defined tasks. The method includes the steps of receiving a task metadata data structure to be stored in a memory associated with a processor, transmitting the task metadata data structure to a scheduling unit of the processor, storing the task metadata data structure in a cache unit included in the scheduling unit, and copying the task metadata data structure from the cache unit to the memory. | 11-13-2014 |
Phillip Alexander Cuadra, San Francisco, CA US
Patent application number | Description | Published |
---|---|---|
20140189711 | COOPERATIVE THREAD ARRAY GRANULARITY CONTEXT SWITCH DURING TRAP HANDLING - Techniques are provided for restoring thread groups in a cooperative thread array (CTA) within a processing core. Each thread group in the CTA is launched to execute a context restore routine. Each thread group executes the context restore routine to restore from a memory a first portion of context associated with the thread group, and determines whether the thread group completed an assigned function prior to executing the context restore routine. If the thread group completed an assigned function prior to executing the context restore routine, then the thread group exits the context restore routine. If the thread group did not complete the assigned function prior to executing the context restore routine, then the thread group executes one or more operations associated with a trap handler routine. One advantage of the disclosed techniques is that the trap handling routine operates efficiently in parallel processors. | 07-03-2014 |
Philip Alexander Cuadra, Mountain View, CA US
Patent application number | Description | Published |
---|---|---|
20130152094 | ERROR CHECKING IN OUT-OF-ORDER TASK SCHEDULING - One embodiment of the present invention sets forth a technique for error-checking a compute task. The technique involves receiving a pointer to a compute task, storing the pointer in a scheduling queue, determining that the compute task should be executed, retrieving the pointer from the scheduling queue, determining via an error-check procedure that the compute task is eligible for execution, and executing the compute task. | 06-13-2013 |
Robert A. Cuadra, Los Angeles, CA US
Patent application number | Description | Published |
---|---|---|
20150121423 | VIEWER-AUTHORED CONTENT ACQUISITION AND MANAGEMENT SYSTEM FOR IN-THE-MOMENT BROADCAST IN CONJUNCTION WITH MEDIA PROGRAMS - A method, apparatus, and system for providing viewer-derived content for broadcast presentation in conjunction with a broadcast of a media program by a provider of the media program is disclosed. The disclosed system and method (1) simplifies the process for viewers to provide viewer-authored media to broadcasters, while minimizing the data transmission requirements between portable viewer devices and the broadcaster, (2) allows advance approval for the broadcasters to use that viewer-generated content to generate and disseminate viewer-authored content, (3) provides for management of viewer-generated content, and (4) integrates with social networks that can be used to at least preliminarily assess the popularity and suitability of the viewer-generated content for broadcast to other viewers. | 04-30-2015 |