Patent application number | Description | Published |
20090128143 | Systems and Methods for RF Magnetic-Field Vector Detection Based on Spin Rectification Effects - Systems and methods for RF magnetic-field vector detection based on spin rectification effects are described. In one embodiment, a method comprises sweeping a quasi-static external applied magnetic field at an h-vector detector, measuring voltages across terminals of the h-vector detector when the detector receives a microwave, varying the angle between the external applied static magnetic field and the RF current, determining an angular dependence of the measured voltages, and calculating a magnetic-field vector (h-vector) component of the microwave. In another embodiment, a method comprises providing an array of h-vector detectors, each element of the array being positioned at a different angle with respect to each other, subjecting the array to an external swept quasi-static magnetic field, measuring voltages across terminals of each element of the array when the array receives a microwave, associating each measured voltage with a respective angle, and calculating at least one h-vector component of the microwave. | 05-21-2009 |
20120001656 | Apparatus, System, and Method for Direct Phase Probing and Mapping of Electromagnetic Signals - An apparatus, system, and method for phase detection of electromagnetic signals are presented. The apparatus may include a magnetic element, one or more first signal contacts coupled to the magnetic element for receiving a first signal, and one or more output contacts coupled to the magnetic element for providing a variable level voltage generated by the magnetic element, the level of the voltage being responsive to a phase difference between the first signal and a second signal. In a further embodiment, the apparatus may include a substrate for mechanically supporting the magnetic element. Additionally, the apparatus may include a conductor mechanically supported by substrate, the conductor configured to receive the second signal. | 01-05-2012 |
20150221847 | SEEBECK RECTIFICATION ENABLED BY INTRINSIC THERMOELECTRIC COUPLING IN MAGNETIC TUNNELING JUNCTIONS - Embodiments of intrinsic magneto-thermoelectric transport in MTJs carrying a tunneling current I in the absence of external heat sources are presented. In one embodiment, Ohm's law for describing MTJs may be revised even in the linear transport regime. This has a profound impact on the dynamic response of MTJs subject to an ac electric bias with frequency ω, as demonstrated by a novel Seebeck rectification effect measured for ω up to microwave (GHz) frequencies. This Seebeck rectification effect may be employed in magneto-thermoelectric devices. | 08-06-2015 |
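The angular-dependence step described in 20090128143 above (record voltage versus the angle between the static field and the RF current, then extract h-vector components) can be sketched as a small least-squares fit. The two-term model V(θ) = A·cosθ + B·sinθ and all names here are illustrative assumptions, not the filing's actual lineshape:

```python
import math

def fit_h_components(angles_deg, voltages):
    """Least-squares fit of V(theta) = A*cos(theta) + B*sin(theta).

    Illustrative only: A and B stand in for h-vector components; the
    real angular dependence in the application is more involved.
    """
    # Accumulate the normal equations for the 2-parameter linear model.
    scc = sss = scs = sv_c = sv_s = 0.0
    for a_deg, v in zip(angles_deg, voltages):
        c, s = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
        scc += c * c
        sss += s * s
        scs += c * s
        sv_c += v * c
        sv_s += v * s
    det = scc * sss - scs * scs
    A = (sv_c * sss - sv_s * scs) / det
    B = (sv_s * scc - sv_c * scs) / det
    return A, B

# Synthetic sweep: a detector whose true components are A=2.0, B=-0.5.
angles = list(range(0, 360, 15))
volts = [2.0 * math.cos(math.radians(a)) - 0.5 * math.sin(math.radians(a))
         for a in angles]
print(fit_h_components(angles, volts))  # ≈ (2.0, -0.5)
```

With noiseless synthetic data the fit recovers the components exactly (up to float error); with measured voltages the same normal equations give the least-squares estimate.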
Patent application number | Description | Published |
20110081890 | SYSTEM AND METHOD PROVIDING INTEROPERABILITY BETWEEN CELLULAR AND OTHER WIRELESS SYSTEMS - A method and corresponding apparatus for providing a cellular subscriber with access to a WLAN are provided. They involve identifying, from an access request, a multimode mobile terminal that corresponds to the subscriber, and the WLAN. Based on the identification, the WLAN is authorized to provide the mobile terminal with access. The mobile terminal is then provided with access to the WLAN as a cellular subscriber, enabling interoperability between the two networks. For example, the subscriber does not have to supply a credit card to pay for WLAN access directly. Instead, the subscriber pays a cellular network provider, and, in turn, the cellular network provider pays a WLAN provider for the access. | 04-07-2011 |
20140092891 | SYSTEM AND METHOD PROVIDING INTEROPERABILITY BETWEEN CELLULAR AND OTHER WIRELESS SYSTEMS - A method and corresponding apparatus for providing a cellular subscriber with access to a WLAN are provided. They involve identifying, from an access request, a multimode mobile terminal that corresponds to the subscriber, and the WLAN. Based on the identification, the WLAN is authorized to provide the mobile terminal with access. The mobile terminal is then provided with access to the WLAN as a cellular subscriber, enabling interoperability between the two networks. For example, the subscriber does not have to supply a credit card to pay for WLAN access directly. Instead, the subscriber pays a cellular network provider, and, in turn, the cellular network provider pays a WLAN provider for the access. | 04-03-2014 |
20140204814 | POWER SAVING IN WIRELESS NETWORK ENTITIES - Power saving in wireless networks is disclosed. A wireless network entity that includes a module to enable a reduction in power consumption in that wireless network entity is also disclosed. The module is configured to determine that a selected wireless station of one or more wireless stations associated with the wireless network entity in a same wireless network will transmit system control information (including synchronization information and service identification information) that is normally transmitted by the wireless network entity. | 07-24-2014 |
20150078239 | POWER SAVING IN WIRELESS NETWORK ENTITIES - Power saving in wireless networks is disclosed. A wireless network entity that includes a module to enable a reduction in power consumption in that wireless network entity is also disclosed. The module is configured to determine that a selected wireless station of one or more wireless stations associated with the wireless network entity in a same wireless network will transmit system control information (including synchronization information and service identification information) that is normally transmitted by the wireless network entity. | 03-19-2015 |
20160037531 | RELAY SYSTEMS AND METHODS FOR WIRELESS NETWORKS - In one embodiment, a method is performed by a wireless station. The method includes determining that a wireless network provides relay service. The wireless network includes an access point and one or more relay nodes. The method further includes transmitting a relay-service desirability indication to the access point. The method also includes receiving a relay-service confirmation from the access point. The wireless station is operable to transmit at a first station-transmission power level during a first time period and a second station-transmission power level during a second time period. The second station-transmission power level is a reduced station-transmission power level as compared to the first station-transmission power level. In addition, the method includes transmitting an uplink transmission at the second station-transmission power level responsive to the relay-service confirmation from the access point. | 02-04-2016 |
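The two-level power scheme in 20160037531 above (full station-transmission power until the access point confirms relay service, reduced power afterward) can be sketched as a tiny state machine. The class name and the dBm values are illustrative assumptions, not from the filing:

```python
class RelayAwareStation:
    """Toy sketch of the power-level logic in application 20160037531.

    The dBm figures are assumptions for illustration only.
    """

    def __init__(self, full_power_dbm=20.0, reduced_power_dbm=10.0):
        self.full = full_power_dbm
        self.reduced = reduced_power_dbm
        self.relay_confirmed = False

    def on_relay_confirmation(self):
        # Once the AP confirms relay service, uplink frames can be sent
        # at the reduced level because a nearby relay node forwards them.
        self.relay_confirmed = True

    def uplink_power(self):
        """Power level to use for the next uplink transmission."""
        return self.reduced if self.relay_confirmed else self.full

sta = RelayAwareStation()
print(sta.uplink_power())      # 20.0 before relay-service confirmation
sta.on_relay_confirmation()
print(sta.uplink_power())      # 10.0 after confirmation
```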
Patent application number | Description | Published |
20090064119 | Systems, Methods, And Computer Products For Compiler Support For Aggressive Safe Load Speculation - Systems, methods and computer products for compiler support for aggressive safe load speculation. Exemplary embodiments include a method for aggressive safe load speculation for a compiler in a computer system, the method including building a control flow graph, identifying both countable and non-countable loops, gathering a set of candidate loops for load speculation, and, for each candidate loop in the set gathered for load speculation: computing an estimate of the iteration count, delay cycles, and code size; performing a profitability analysis and determining an unroll factor based on the delay cycles and the code size; transforming the loop by generating a prologue loop to achieve data alignment and an unrolled main loop with loop directives indicating which loads can safely be executed speculatively; and performing low-level instruction scheduling on the generated unrolled main loop. | 03-05-2009 |
20130125105 | UNIFIED PARALLEL C WORK-SHARING LOOP CONSTRUCT TRANSFORMATION - Control flow information and data flow information associated with a program containing a upc_forall loop are built. A shared reference map data structure using the control flow information and the data flow information is created. All local shared accesses are hashed to facilitate a constant access stride after being rewritten. All local shared references in a hash entry having a longest list are privatized. The upc_forall loop is rewritten into a for loop. Responsive to a determination that an unprocessed upc_forall loop does not exist, dead store elimination is run. The control flow information and the data flow information associated with the program containing the for loop are rebuilt. | 05-16-2013 |
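The profitability analysis in 20090064119 above, which picks an unroll factor from the estimated delay cycles and code size, can be sketched as a heuristic. The cost model, thresholds, and function name here are assumptions, not the patented analysis:

```python
def choose_unroll_factor(iteration_count, delay_cycles, body_size,
                         max_code_size=256, candidates=(8, 4, 2, 1)):
    """Pick the largest unroll factor that hides the load delay without
    exceeding a code-size budget.

    Heuristic sketch only: the candidate factors, the size budget, and
    the amortization model are illustrative assumptions.
    """
    for f in candidates:
        if f > iteration_count:
            # Cannot unroll more times than the loop iterates.
            continue
        unrolled_size = body_size * f
        # Unrolling by f lets f independent speculative loads overlap,
        # amortizing the per-load delay across the unrolled body.
        amortized_delay = delay_cycles / f
        if unrolled_size <= max_code_size and amortized_delay <= body_size:
            return f
    return 1

# 1000 iterations, 12-cycle load delay, 20-instruction body:
print(choose_unroll_factor(1000, 12, 20))   # 8 (budget and delay both satisfied)
# A 200-instruction body blows the size budget for any factor > 1:
print(choose_unroll_factor(1000, 12, 200))  # 1
```

The descending scan mirrors the usual compiler pattern: try the most aggressive transformation first and fall back when the cost model rejects it.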
Patent application number | Description | Published |
20110023817 | Variable-coordination-timing type self-cooling engine with variable-profile-camshaft - The present invention provides a variable-coordination-timing type self-cooling engine capable of adjusting the initiation timing and the injected amount of each injection process according to the change in the combustion pressure with the variable-coordination-timing system. | 02-03-2011 |
20110251743 | Mackay cold-expansion engine system - The present invention provides an integrated engine system; said integrated engine system includes an air-compression means, an air-buffer-system, a power-management-unit, and at least two cold-expansion-chambers; wherein each of said at least two cold-expansion-chambers includes a spark-ignition means, a fuel-supplying means, a cold-air-injection means, and a reenergize-air-injection means; each cold-expansion-chamber operates in a Mackay Cold-Expansion Cycle, which includes a first-intake-process, a hot-combustion-process, a fuel-cooling-process, a second-intake-process, a cold-expansion-process, and an active-exhaust-process; wherein the fuel-cooling-process may be disabled according to the operation condition. | 10-13-2011 |
20110303191 | Low-cost type mackay four-stroke engine system - The present invention provides an integrated engine system, which includes an air-compression means, an air-buffer-system, a fuel-supplying means, a power-management-unit, and at least two cold-expansion-chambers; wherein each of said at least two cold-expansion-chambers operates with a Simplified Mackay Four-Stroke Cycle, which includes a first-intake-process, a compression-process, a hot-combustion-process, a second-intake-process, a cold-expansion-process, and an exhaust-process. | 12-15-2011 |
20120090580 | Controlled-compression direct-power-cycle engine - The present invention provides a controlled-compression direct-power-cycle engine for performing the direct-power-cycle, wherein the air is compressed in three compression processes and cooled to a controlled temperature before ignition; the engine power output is controlled by both the compressor-transmission and the servo-intake-valve. The three compression processes are the initial-compression-process, the intermediate-compression-process, and the final-compression-process, wherein the initial-compression-process is performed by the turbocharger, the intermediate-compression-process is performed by a screw type, rotary type, or scroll type intermediate-compressor, and the final-compression-process is performed by the pistons of the combustion chambers. Said intermediate-compressor is coupled to the compressor-transmission for adjusting the compression-capacity according to instruction signals from the engine control unit, which computes the required compression-capacity from the user's power demand and the pressure in the cooling tank. Said final-compression-process adjusts the actual-pressure-ratio with the actuation-time of the servo-intake-valve; said servo-intake-valve is opened for 5-60 degrees of crankshaft rotation and is shut at a point between 90 degrees BTC and 10 degrees BTC according to instruction signals from the engine control unit. The compressor-transmission is set to provide a higher airflow and said servo-intake-valve is shut at an earlier crankshaft reference angle to increase the actual-pressure-ratio of the final-compression-process for operating the direct-power-cycle at a high power output, whereas the compressor-transmission is set to provide a lower airflow and said servo-intake-valve is shut at a later crankshaft reference angle to decrease the actual-pressure-ratio of the final-compression-process for operating the direct-power-cycle at a lower power output. | 04-19-2012 |
20130139769 | Mackay Tri-expansion cycle engine utilizing an eight-stroke master cylinder and an eight-stroke slave cylinder - The present invention provides a Mackay tri-expansion cycle engine which operates with an eight-stroke master cylinder and an eight-stroke slave cylinder; the Mackay tri-expansion cycle engine intakes air and fuel into the eight-stroke master cylinder, and the air-fuel-mixture combusts in three expansion processes; the first expansion process generates power at high temperature with a hot-combustion-medium of high CO concentration; the second expansion process generates power with a cold-expansion-medium mixing from said hot-combustion-medium and a compressed air, spontaneously converting all CO content into CO2 | 06-06-2013 |
Patent application number | Description | Published |
20080220712 | AIRFLOW BOOSTING ASSEMBLY FOR A FORCED AIR CIRCULATION AND DELIVERY SYSTEM - The invention relates generally to the field of airflow boosting devices. In particular, the invention relates to a booster fan for installation into a vent opening of a duct system in a forced air circulation and delivery system. In an embodiment, the booster fan includes a register plate for covering a vent opening. An opening or openings on the register plate provide an air outlet. A housing is secured to the register plate for enclosing a crossflow fan therein. The crossflow fan is disposed adjacent and spaced from the register plate and resiliently supported at both ends. A motor is resiliently connected to the crossflow fan. The housing also has an aperture for providing an air inlet communicating with the duct system. Preferably, two arcuate air deflection panels are provided in the housing for connecting the air inlet and air outlet to form a guided air passageway. | 09-11-2008 |
20100015905 | Airflow boosting assembly for a forced air circulation and delivery system - The invention relates generally to the field of airflow boosting devices. In particular, the invention relates to a booster fan for installation into a vent opening of a duct system in a forced air circulation and delivery system. In an embodiment, the booster fan includes a register plate for covering a vent opening. An opening or openings on the register plate provide an air outlet. A housing is secured to the register plate for enclosing a crossflow fan therein. The crossflow fan is disposed adjacent and spaced from the register plate and resiliently supported at both ends. A motor is resiliently connected to the crossflow fan. The housing also has an aperture for providing an air inlet communicating with the duct system. Preferably, two arcuate air deflection panels are provided in the housing for connecting the air inlet and air outlet to form a guided air passageway. | 01-21-2010 |
20150024675 | Airflow Boosting Assembly for a Forced Air Circulation and Delivery System - The invention relates generally to the field of airflow boosting devices. In particular, the invention relates to a booster fan for installation into a vent opening of a duct system in a forced air circulation and delivery system. In an embodiment, the booster fan includes a register plate for covering a vent opening. An opening or openings on the register plate provide an air outlet. A housing is secured to the register plate for enclosing a crossflow fan therein. The crossflow fan is disposed adjacent and spaced from the register plate and resiliently supported at both ends. A motor is resiliently connected to the crossflow fan. The housing also has an aperture for providing an air inlet communicating with the duct system. Preferably, two arcuate air deflection panels are provided in the housing for connecting the air inlet and air outlet to form a guided air passageway. | 01-22-2015 |
Patent application number | Description | Published |
20130147012 | CIRCUIT BOARD COMPONENT SHIM STRUCTURE - Various circuit boards and methods of fabricating the same are disclosed. In one aspect, a method of manufacturing is provided that includes coupling an electrically non-functional component to a surface of a first circuit board. The electrically non-functional component has a first elevation. The surface of the circuit board is adapted to have a semiconductor chip mounted thereon. An electrically functional component is mounted to the surface inward from the electrically non-functional component. The electrically functional component has a second elevation less than the first elevation. | 06-13-2013 |
20130343000 | THERMAL MANAGEMENT CIRCUIT BOARD FOR STACKED SEMICONDUCTOR CHIP DEVICE - A method of assembling a semiconductor chip device is provided. The method includes providing a first circuit board that has a plurality of thermally conductive vias. A second circuit board is mounted on the first circuit board over and in thermal contact with the thermally conductive vias. The second circuit board includes a first side facing the first circuit board and a second, opposite side. | 12-26-2013 |
20150049441 | CIRCUIT BOARD WITH CORNER HOLLOWS - A method of manufacturing is provided that includes singulating a circuit board from a substrate comprising a plurality of the circuit boards, wherein the circuit board is shaped to have four corner hollows. The corner hollows may be various shapes. | 02-19-2015 |
20150279794 | SEMICONDUCTOR CHIP WITH PATTERNED UNDERBUMP METALLIZATION AND POLYMER FILM - Various semiconductor chip solder bump and underbump metallization (UBM) structures and methods of making the same are disclosed. In one aspect, a method is provided that includes forming a first underbump metallization layer on a semiconductor chip. The first underbump metallization layer has a hub, a first portion extending laterally from the hub, and a spoke connecting the hub to the first portion. A polymer layer is applied to the first underbump metallization layer. The polymer layer includes a first opening in alignment with the hub and a second opening in alignment with the spoke. A portion of the spoke is removed via the second opening to sever the connection between the hub and the first portion. | 10-01-2015 |
20160073493 | STIFFENER RING FOR CIRCUIT BOARD - Various stiffener rings and circuit boards are disclosed. In one aspect, an apparatus is provided that includes a stiffener ring that has a first flange to engage a first principal side of a circuit board and a peripheral wall to engage an external peripheral wall of the circuit board. | 03-10-2016 |
Patent application number | Description | Published |
20100286387 | CRYSTALLINE SULPHATED CELLULOSE II AND ITS PRODUCTION FROM SULPHURIC ACID HYDROLYSIS OF CELLULOSE - A method for producing crystalline sulphated cellulose II materials with a relatively low degree of polymerization from spent liquors of sulphuric acid (H2SO4) hydrolysis of cellulose. | 11-11-2010 |
20140288296 | CELLULOSE FILMS WITH AT LEAST ONE HYDROPHOBIC OR LESS HYDROPHILIC SURFACE - A method for the production of cellulose films with at least one hydrophobic or less hydrophilic surface, or with at least one surface with a water contact angle (θ) in a range from 55° to less than 100°, is described. The method involves contacting the cellulose material with a hydrophobic solid material during the preparation of the cellulose films, or with a vapour of a non-polar or polar aprotic solvent during or after the preparation of the cellulose films. Examples of the cellulose material are cellulose filaments (CF) made to have at least 50% by weight of the filaments having a filament length up to 350 μm and a filament diameter between 100 and 500 nm from multi-pass, high consistency refining of wood or plant fibers, and commercially-available sodium carboxymethyl cellulose. Examples of the hydrophobic solid material are hydrophobic polymers such as poly(methylpentene) and poly(ethylene). Examples of the non-polar solvent are hexane and toluene. Examples of the polar aprotic solvent are acetone and ethyl acetate. | 09-25-2014 |
20150275433 | DRY CELLULOSE FILAMENTS AND THE METHOD OF MAKING THE SAME - The present invention relates to dry cellulose filaments, particularly those that are re-dispersible in water. Dry cellulose filaments comprise at least 50% by weight of the filaments having a filament length up to 350 μm and a diameter of between 100 and 500 nm, wherein the filaments are re-dispersible in water. Also described here is a film of dry cellulose filaments comprising the filaments described, wherein the film is dispersible in water. A method of making a dry film of cellulose filaments is also described that includes providing a liquid suspension of the cellulose filaments described, and retaining the filaments on the forming section of a paper or tissue making machine or on a modified paper or tissue making machine. The film can optionally be converted to powders or flakes for shipment, storage or subsequent uses. The filaments, the film, the powders or flakes and the method are, in a preferred embodiment, free of additives and of any derivatization of the filaments. | 10-01-2015 |
Patent application number | Description | Published |
20140250896 | COMBUSTOR HEAT SHIELD WITH CARBON AVOIDANCE FEATURE - The build-up of carbon deposition on the front face of a combustor heat shield is discouraged by jetting air out from the front face of the heat shield with sufficient momentum to push approaching fuel droplets or rich fuel-air mixture away from the heat shield. | 09-11-2014 |
20140260266 | COMBUSTOR FOR GAS TURBINE ENGINE - A gas turbine engine comprises an annular combustor chamber formed between an inner liner and an outer liner. An annular upstream zone is adapted to receive fuel and air from an annular nozzle. An annular mixing zone is located downstream of the upstream zone. The mixing zone has a reduced radial height relative to a downstream combustion zone of the combustion chamber, the mixing zone being defined by straight wall sections. | 09-18-2014 |
20140260298 | COMBUSTOR FOR GAS TURBINE ENGINE - A combustor comprises an annular combustor chamber formed between the inner and outer liners. Fuel nozzles each have an end in fluid communication with the annular combustor chamber to inject fuel in the annular combustor chamber, the fuel nozzles oriented to inject fuel in a fuel flow direction having an axial component relative to the central axis of the annular combustor chamber. A plurality of nozzle air holes are defined through the inner liner and the outer liner adjacent to and downstream of the fuel nozzles. The nozzle air holes are configured for high pressure air to be injected from an exterior of the liners through the nozzle air holes generally radially into the annular combustor chamber. A central axis of the nozzle air holes has a tangential component relative to the central axis of the annular combustor chamber. | 09-18-2014 |
20150113994 | COMBUSTOR FOR GAS TURBINE ENGINE - In a gas turbine combustor having inner and outer liners defining an annular combustion chamber, at least one annular scoop ring is provided on each of the inner and outer combustor liners. The annular scoop ring includes a solid radial inner base provided with bores defined therein and communicating with the combustion chamber to form air dilution inlets. The scoop ring has a radial outer portion in the form of a C-shaped scoop open to receive high velocity annular air flow. The bores of the inlets communicate with the scoop portion to direct the air flow into the combustion chamber, whereby the bores of the inlets form jet nozzles to control air jet penetration and direction within the combustion chamber. | 04-30-2015 |
20150338102 | COMBUSTOR FOR GAS TURBINE ENGINE - A combustor comprises an annular combustor chamber formed between the inner and outer liners. Fuel nozzles each have an end in fluid communication with the annular combustor chamber to inject fuel in the annular combustor chamber, the fuel nozzles oriented to inject fuel in a fuel flow direction having an axial component relative to the central axis of the annular combustor chamber. A plurality of nozzle air holes are defined through the inner liner and the outer liner adjacent to and downstream of the fuel nozzles. The nozzle air holes are configured for high pressure air to be injected from an exterior of the liners through the nozzle air holes generally radially into the annular combustor chamber. A central axis of the nozzle air holes has a tangential component relative to the central axis of the annular combustor chamber. | 11-26-2015 |
Patent application number | Description | Published |
20100248785 | FEEDER CABLE REDUCTION - The present invention allows transmission of multiple signals between masthead electronics and base housing electronics in a base station environment. At least some of the received signals from the multiple antennas are translated to being centered about different center frequencies, such that the translated signals may be combined into a composite signal including each of the received signals. The composite signal is then sent over a single feeder cable to base housing electronics, wherein the received signals are separated and processed by transceiver circuitry. Prior to being provided to the transceiver circuitry, those signals that were translated from being centered about one frequency to another may be retranslated to being centered about the original center frequency. | 09-30-2010 |
20120058720 | FEEDER CABLE REDUCTION - The present invention allows transmission of multiple signals between masthead electronics and base housing electronics in a base station environment. At least some of the received signals from the multiple antennas are translated to being centered about different center frequencies, such that the translated signals may be combined into a composite signal including each of the received signals. The composite signal is then sent over a single feeder cable to base housing electronics, wherein the received signals are separated and processed by transceiver circuitry. Prior to being provided to the transceiver circuitry, those signals that were translated from being centered about one frequency to another may be retranslated to being centered about the original center frequency. | 03-08-2012 |
20130143623 | FEEDER CABLE REDUCTION - The present disclosure allows transmission of multiple signals between masthead electronics and base housing electronics in a base station environment. At least some of the received signals from the multiple antennas are translated to being centered about different center frequencies, such that the translated signals may be combined into a composite signal including each of the received signals. The composite signal is then sent over a single feeder cable to base housing electronics, wherein the received signals are separated and processed by transceiver circuitry. Prior to being provided to the transceiver circuitry, those signals that were translated from being centered about one frequency to another may be retranslated to being centered about the original center frequency. | 06-06-2013 |
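The feeder-cable scheme in the three applications above (translate some received signals to different center frequencies, combine them onto one cable, then retranslate at the base housing) can be sketched with complex baseband tones. The sample rate, tone frequencies, and 100 Hz shift are all illustrative assumptions; a boxcar or proper low-pass filter would follow the retranslation in practice:

```python
import cmath

FS = 1000.0   # illustrative sample rate, Hz
N = 1000      # one-second capture: 1 Hz bin spacing, integer tones orthogonal

def translate(samples, shift_hz, fs=FS):
    """Mix a complex baseband signal up or down by shift_hz (mixer sketch)."""
    return [s * cmath.exp(2j * cmath.pi * shift_hz * k / fs)
            for k, s in enumerate(samples)]

def dft_bin(samples, freq_hz, fs=FS):
    """Complex amplitude of the single DFT bin at freq_hz."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * cmath.pi * freq_hz * k / fs)
               for k, s in enumerate(samples)) / n

# Two antenna signals, both received centered about 0 Hz (complex baseband).
sig_a = [cmath.exp(2j * cmath.pi * 5 * k / FS) for k in range(N)]  # 5 Hz tone
sig_b = [cmath.exp(2j * cmath.pi * 7 * k / FS) for k in range(N)]  # 7 Hz tone

# Masthead: shift signal B to a different center frequency, then combine
# so a single feeder cable carries both signals.
composite = [a + b for a, b in zip(sig_a, translate(sig_b, 100.0))]

# Base housing: retranslate B back to its original center frequency
# before handing it to the transceiver circuitry.
recovered_b = translate(composite, -100.0)

print(abs(dft_bin(composite, 107)))   # ~1.0: B now sits 100 Hz higher
print(abs(dft_bin(recovered_b, 7)))   # ~1.0: B restored after retranslation
```

Because the capture length makes integer-frequency tones orthogonal, the single-bin DFT cleanly shows each signal at its translated and restored center frequencies.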
Patent application number | Description | Published |
20080256238 | Method and system for utilizing a resource conductor to optimize resource management in a distributed computing environment - Disclosed herein are embodiments of a method and system for optimizing resource management in a distributed computing environment through the use of a resource conductor. An application managed by an application manager requires resources managed by a resource manager. A resource conductor in communication with both the application manager and the resource manager receives from the application manager a processing specification for the application and workload associated with the application. The processing specification provides the resource conductor with information needed to determine the type and quantity of resources appropriate for processing the workload associated with the application. The resource conductor adjusts the quantity of resources allocated to the application by communicating with the resource manager. | 10-16-2008 |
20080256245 | Method and system for information exchange utilizing an asynchronous persistent store protocol - Disclosed herein are embodiments of a method and system for facilitating the exchange of information between interconnected processors in environments requiring high performance and high reliability. In an exemplary embodiment, the source sends input to the target and expects output from the target in return. A manager in communication with both the source and the target receives and initiates a storage of the information in nonvolatile memory. The manager concurrently forwards the information to its proper destination. If the manager receives output from the target before completion of the input storage, the manager cancels the input storage because it is no longer needed to ensure system reliability. If the manager receives acknowledgement from the source that the target output has been received before completion of the output storage, the manager cancels the output storage because it is no longer needed to ensure system reliability. Related embodiments are also described. | 10-16-2008 |
20080270523 | Grid-enabled, service-oriented architecture for enabling high-speed computing applications - Disclosed herein are systems and methods for a distributed computing system having a service-oriented architecture. The system is configured to receive workloads from client applications and to execute workloads on service hosts. The distributed computing system dynamically assigns the workloads to the applications running on the service hosts, with the workloads being assigned according to the service needs and the availability of service hosts and other resources on the system. The presently disclosed systems and methods provide for high-throughput communications through an asynchronous binary or a synchronous binary communications protocol. Further disclosed embodiments include flexible failover and upgrade techniques, isolation between execution users of the system, virtualization through mobility and the ability to grow and shrink assigned resources, and for a software development kit adapted for the present architecture. | 10-30-2008 |
20120197961 | METHOD AND SYSTEM FOR INFORMATION EXCHANGE UTILIZING AN ASYNCHRONOUS PERSISTENT STORE PROTOCOL - According to one aspect of the present disclosure, a method and technique for facilitating the exchange of information between interconnected computing entities is disclosed. The method includes: receiving from a client, by a workload manager, a workload unit of data in need of processing by the client; initiating by the workload manager a persistent storage of the workload unit of data received from the client; without waiting for the initiated storage of the workload unit of data to complete, sending by the workload manager the workload unit of data to a plurality of compute nodes; and responsive to receiving a result of a processing of the workload unit of data by one of the plurality of compute nodes, canceling processing by the workload manager of the workload unit of data by a remainder of the plurality of compute nodes. | 08-02-2012 |
20120226811 | GRID-ENABLED, SERVICE-ORIENTED ARCHITECTURE FOR ENABLING HIGH-SPEED COMPUTING APPLICATIONS - According to one aspect of the present disclosure, a method and technique for data processing in a distributed computing system having a service-oriented architecture is disclosed. The method includes: receiving, by a workload input interface, workloads associated with an application from one or more clients for execution on the distributed computing system; identifying, by a resource management interface, available service hosts or service instances for computing the workloads received from the one or more clients; responsive to receiving an allocation request for the one or more hosts or service instances by the workload input interface, providing, by the resource management interface, address information of one or more workload output interfaces; and sending, by the one or more workload output interfaces, workloads received from the workload input interface to the one or more service instances. | 09-06-2012 |
20150227389 | INTERLEAVE-SCHEDULING OF CORRELATED TASKS AND BACKFILL-SCHEDULING OF DEPENDER TASKS INTO A SLOT OF DEPENDEE TASKS - Methods and arrangements for assembling tasks in a progressive queue. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task. The dependee tasks are assembled in a progressive queue for execution, and the dependee tasks are executed. Other variants and embodiments are broadly contemplated herein. | 08-13-2015 |
20150227394 | DETECTION OF TIME POINTS TO VOLUNTARILY YIELD RESOURCES FOR CONTEXT SWITCHING - Methods and arrangements for yielding resources in data processing. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task, and the at least one of the dependee set of tasks is executed. At least one resource of the at least one of the dependee set of tasks is yielded upon detection of resource underutilization in at least one other location. Other variants and embodiments are broadly contemplated herein. | 08-13-2015 |
20150227399 | MANAGING DATA SEGMENTS IN MEMORY FOR CONTEXT SWITCHING WITH STANDALONE FETCH AND MERGE SERVICES - Methods and arrangements for managing data segments. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task, and the at least one of the dependee set of tasks is executed. There is extracted, from the at least one of the dependee set of tasks, at least one service common to at least another of the dependee set of tasks. Other variants and embodiments are broadly contemplated herein. | 08-13-2015 |
20160062795 | MULTI-LAYER QOS MANAGEMENT IN A DISTRIBUTED COMPUTING ENVIRONMENT - A technique for multi-layer quality of service (QoS) management in a distributed computing environment includes: receiving a workload to run in a distributed computing environment; identifying a workload quality of service (QoS) class for the workload; translating the workload QoS class to a storage level QoS class; scheduling the workload to run on a compute node of the environment; communicating the storage level QoS class to a workload execution manager of the compute node; communicating the storage level QoS class to one or more storage managers of the environment, the storage managers managing storage resources in the environment; and extending, by the storage managers, the storage level QoS class to the storage resources to support the workload QoS class. | 03-03-2016 |
20160065492 | MULTI-LAYER QOS MANAGEMENT IN A DISTRIBUTED COMPUTING ENVIRONMENT - A system for multi-layer quality of service (QoS) management in a distributed computing environment includes: a management node hosting a workload scheduler operable to receive a workload and identify a workload QoS class for the workload; and a plurality of distributed compute nodes, the workload scheduler operable to schedule running of the workload on the compute nodes. The workload scheduler is operable to: translate the workload QoS class to a storage level QoS class; communicate the storage level QoS class to a workload execution manager of the compute nodes; and communicate the storage level QoS class to one or more storage managers, the storage managers managing storage resources. The storage managers are operable to extend the storage level QoS class to the storage resources to support the workload QoS class. | 03-03-2016 |
20150143381 | COMPUTING SESSION WORKLOAD SCHEDULING AND MANAGEMENT OF PARENT-CHILD TASKS - A single workload scheduler schedules sessions and tasks having a tree structure to resources, wherein the single workload scheduler has scheduling control of the resources and the tasks of the parent-child workload sessions and tasks. The single workload scheduler receives a request to schedule a child session created by a scheduled parent task that when executed results in a child task; the scheduled parent task is dependent on a result of the child task. The single workload scheduler receives a message from the scheduled parent task yielding a resource based on the resource not being used by the scheduled parent task, schedules tasks to backfill the resource, and returns the resource yielded by the scheduled parent task to the scheduled parent task based on receiving a resume request from the scheduled parent task or determining dependencies of the scheduled parent task have been met. | 05-21-2015 |
20150149632 | MINIMIZING SERVICE RESTART BY OPTIMALLY RESIZING SERVICE POOLS - A method, computer program product, and system for optimizing service pools supporting resource sharing and enforcing SLAs, to minimize service restart. A computer processor determines a first resource to be idle, wherein a service instance continues to occupy the first resource that is idle. The processor adds the first resource to a resource pool, wherein the service instance continues to occupy the first resource as a global standby service instance on the first resource. The processor receives a request for a resource, wherein the request for the resource includes a global name associated with a service that corresponds to the global standby service instance, and the processor allocates, from the resource pool, the first resource having the global standby service instance, based on the request for the resource that includes the global name associated with the service corresponding to the global standby service instance. | 05-28-2015 |
20150149637 | MINIMIZING SERVICE RESTART BY OPTIMALLY RESIZING SERVICE POOLS - A method, computer program product, and system for optimizing service pools supporting resource sharing and enforcing SLAs, to minimize service restart. A computer processor determines a first resource to be idle, wherein a service instance continues to occupy the first resource that is idle. The processor adds the first resource to a resource pool, wherein the service instance continues to occupy the first resource as a global standby service instance on the first resource. The processor receives a request for a resource, wherein the request for the resource includes a global name associated with a service that corresponds to the global standby service instance, and the processor allocates, from the resource pool, the first resource having the global standby service instance, based on the request for the resource that includes the global name associated with the service corresponding to the global standby service instance. | 05-28-2015 |
20150143380 | SCHEDULING WORKLOADS AND MAKING PROVISION DECISIONS OF COMPUTER RESOURCES IN A COMPUTING ENVIRONMENT - Embodiments of the present invention disclose a computer-implemented method, computer program product, and system for workload scheduling and resource provisioning. In one embodiment, in accordance with the present invention, the computer-implemented method includes the steps of scheduling a set of pending workloads for execution on computer resources in a computing environment; identifying a workload in the set of pending workloads that is scheduled to utilize hypothetic resources, wherein hypothetic resources are idle computer resources that are currently not available, but can be made available to execute workloads through provisioning actions; holding the identified workload from dispatch to hypothetic resources for a holding period, wherein the holding period is a customizable duration of time; provisioning the hypothetic resources corresponding to computer resource requirements of the identified workload; and determining whether the provisioned hypothetic resources have become available during the holding period. | 05-21-2015 |
20150150017 | OPTIMIZATION OF MAP-REDUCE SHUFFLE PERFORMANCE THROUGH SHUFFLER I/O PIPELINE ACTIONS AND PLANNING - A shuffler receives information associated with partition segments of map task outputs and a pipeline policy for a job running on a computing device. The shuffler transmits to an operating system of the computing device a request to lock partition segments of the map task outputs and transmits an advisement to keep or load partition segments of map task outputs in the memory of the computing device. The shuffler creates a pipeline based on the pipeline policy, wherein the pipeline includes partition segments locked in the memory and partition segments advised to keep or load in the memory, of the computing device for the job, and the shuffler selects the partition segments locked in the memory, followed by partition segments advised to keep or load in the memory, as a preferential order of partition segments to shuffle. | 05-28-2015 |
20150150018 | OPTIMIZATION OF MAP-REDUCE SHUFFLE PERFORMANCE THROUGH SHUFFLER I/O PIPELINE ACTIONS AND PLANNING - A shuffler receives information associated with partition segments of map task outputs and a pipeline policy for a job running on a computing device. The shuffler transmits to an operating system of the computing device a request to lock partition segments of the map task outputs and transmits an advisement to keep or load partition segments of map task outputs in the memory of the computing device. The shuffler creates a pipeline based on the pipeline policy, wherein the pipeline includes partition segments locked in the memory and partition segments advised to keep or load in the memory, of the computing device for the job, and the shuffler selects the partition segments locked in the memory, followed by partition segments advised to keep or load in the memory, as a preferential order of partition segments to shuffle. | 05-28-2015 |
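The MULTI-LAYER QOS MANAGEMENT entries (20160062795 and 20160065492) describe a scheduler that classifies a workload, translates the workload-level QoS class to a storage-level QoS class, and propagates that class to execution and storage managers. A minimal sketch of that flow follows; all class names, QoS labels, and method names are illustrative assumptions, not taken from the patent claims.

```python
# Hypothetical sketch of the multi-layer QoS flow in 20160062795/20160065492.
# The QoS class names and manager interfaces below are invented for illustration.

# Translation table from workload-level QoS classes to storage-level classes.
WORKLOAD_TO_STORAGE_QOS = {
    "gold": "high-iops",
    "silver": "balanced",
    "bronze": "best-effort",
}

class ExecutionManager:
    """Stand-in for a compute node's workload execution manager."""
    def __init__(self):
        self.runs = []

    def run(self, workload, storage_qos):
        # Record the workload together with its translated storage QoS class.
        self.runs.append((workload["id"], storage_qos))

class StorageManager:
    """Stand-in for a storage manager that extends the QoS class to storage."""
    def __init__(self):
        self.qos_by_workload = {}

    def extend_qos(self, workload_id, storage_qos):
        self.qos_by_workload[workload_id] = storage_qos

class WorkloadScheduler:
    """Management-node scheduler: identifies the workload QoS class,
    translates it to a storage-level class, and communicates it to the
    execution manager and the storage managers."""
    def __init__(self, execution_managers, storage_managers):
        self.execution_managers = execution_managers
        self.storage_managers = storage_managers

    def schedule(self, workload):
        workload_qos = workload["qos_class"]                  # identify workload QoS
        storage_qos = WORKLOAD_TO_STORAGE_QOS[workload_qos]   # translate to storage level
        node = self.execution_managers[0]                     # trivial placement policy
        node.run(workload, storage_qos)                       # inform execution manager
        for sm in self.storage_managers:                      # inform storage managers
            sm.extend_qos(workload["id"], storage_qos)
        return storage_qos
```

The key design point the abstracts emphasize is that the translation happens once, at the scheduler, and the resulting storage-level class flows down both paths (compute and storage) so the storage resources can be managed consistently with the workload's QoS class.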