Patent application number | Description | Published |
20130080627 | SYSTEM AND METHOD FOR SURGE PROTECTION AND RATE ACCELERATION IN A TRAFFIC DIRECTOR ENVIRONMENT - Described herein are systems and methods for use with a load balancer or traffic director, and administration thereof. In accordance with an embodiment, the system comprises a traffic director having one or more traffic director instances, which is configured to receive and communicate requests, from clients, to origin servers having one or more pools of servers. A traffic monitor, at the traffic director, monitors traffic, including the number of connections, between the traffic director instances and one or more of the resource servers within the origin servers. The traffic director can set a traffic rate which controls the traffic, including the number of connections, to provide surge protection, or rate acceleration/deceleration. | 03-28-2013 |
20130080628 | SYSTEM AND METHOD FOR DYNAMIC DISCOVERY OF ORIGIN SERVERS IN A TRAFFIC DIRECTOR ENVIRONMENT - Described herein are systems and methods for use with a load balancer or traffic director, and administration thereof, wherein the traffic director is provided as a software-based load balancer that can be used to deliver a fast, reliable, scalable, and secure platform for load-balancing Internet and other traffic to back-end origin servers, such as web servers, application servers, or other resource servers. In accordance with an embodiment, the system comprises a traffic director having one or more traffic director instances, which is configured to receive and communicate requests, from clients, to origin servers having one or more pools of servers. A health check subsystem periodically checks the health of its configured resource servers, and also attempts to detect changes in the one or more pools, by sending requests to any new server instances configured as origin servers within the pool, receiving appropriate responses, and updating the configuration accordingly. | 03-28-2013 |
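The surge-protection behavior described in application 20130080627 — capping traffic, including the number of connections, between traffic director instances and an origin-server pool — can be sketched as a simple connection ceiling. This is a minimal illustrative sketch under assumed semantics, not the patented implementation; the class and method names are invented for the example.

```python
class SurgeProtector:
    """Illustrative connection limiter: admits requests to an origin-server
    pool only while the number of active connections is below a ceiling."""

    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.active = 0

    def try_acquire(self):
        # Admit the request only if the pool is below its surge ceiling;
        # otherwise the traffic director would reject or queue it.
        if self.active < self.max_connections:
            self.active += 1
            return True
        return False

    def release(self):
        # Called when a proxied request to the origin server completes.
        self.active = max(0, self.active - 1)
```

A rate accelerator/decelerator in the same spirit would simply raise or lower `max_connections` over time based on the traffic monitor's measurements.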
20100329846 | TURBINE ENGINE COMPONENTS - A turbine engine component includes a wall, a main opening, and two clusters of two or more auxiliary openings. The wall includes cool and hot air sides. The main opening extends between the cool air side and the hot air side and has an inlet and an outlet. The inlet is formed on the cool air side, and the outlet is formed on the hot air side. The first cluster of two or more auxiliary openings extends from the main opening to the hot air side. The second cluster of two or more auxiliary openings extends from the main opening to the hot air side. The main opening may be cylindrical or conical with a converging passage extending from the cool air side to the hot air side. The converging main opening may enhance flow through the auxiliary openings especially at high blowing ratios. | 12-30-2010 |
20110123312 | GAS TURBINE ENGINE COMPONENTS WITH IMPROVED FILM COOLING - An engine component includes a body; and a plurality of cooling holes formed in the body. At least one of the cooling holes has a cross-sectional shape with a first concave portion and a first convex portion. | 05-26-2011 |
20110311369 | GAS TURBINE ENGINE COMPONENTS WITH COOLING HOLE TRENCHES - An engine component includes a body having an interior surface and an exterior surface; a cooling hole formed in the body and extending from the interior surface to the exterior surface; and a concave trench extending from the cooling hole at the exterior surface of the body in a downstream direction. | 12-22-2011 |
20130034433 | INTER-TURBINE DUCTS WITH GUIDE VANES - A turbine section of a gas turbine engine is provided. The turbine section is annular about a longitudinal axis and includes first turbine with a first inlet and a first outlet; a second turbine with a second inlet and a second outlet; an inter-turbine duct extending from the first outlet to the second inlet and configured to direct an air flow from the first turbine to the second turbine; and a first guide vane disposed within the inter-turbine duct. | 02-07-2013 |
20130294908 | INTER-TURBINE DUCTS WITH VARIABLE AREA RATIOS - A turbine section of a gas turbine engine is annular about a longitudinal axis. The turbine section includes a first turbine with a first inlet and a first outlet; a second turbine with a second inlet and a second outlet; and an inter-turbine duct extending from the first outlet to the second inlet and configured to direct an air flow from the first turbine to the second turbine. The inter-turbine duct has a first station with a first meridional area, a second station with a second meridional area, and a third station with a third meridional area. The first station is upstream of the second station and the second station is upstream of the third station, and the second meridional area is less than or equal to the first meridional area. | 11-07-2013 |
20130315710 | GAS TURBINE ENGINE COMPONENTS WITH COOLING HOLE TRENCHES - An engine component includes a body having an interior surface and an exterior surface; a cooling hole formed in the body and extending from the interior surface; and a nonconcave trench extending from the cooling hole to the exterior surface of the body in a downstream direction such that cooling air flow from within the body flows through the cooling hole, through the trench, and onto the exterior surface. | 11-28-2013 |
20140154096 | TURBINE BLADE AIRFOILS INCLUDING SHOWERHEAD FILM COOLING SYSTEMS, AND METHODS FOR FORMING AN IMPROVED SHOWERHEAD FILM COOLED AIRFOIL OF A TURBINE BLADE - Turbine blade airfoils, showerhead film cooling systems thereof, and methods for cooling the turbine blade airfoils using the same are provided. The airfoil has a leading edge and a trailing edge, a pressure sidewall and a suction sidewall both extending between the leading and the trailing edges, and an internal cavity for supplying cooling air. A showerhead of film cooling holes is connected to the internal cavity. Each film cooling hole has an inlet connected to the internal cavity and an outlet opening onto an external wall surface at the leading edge of the airfoil. A plurality of surface connectors is formed in the external wall surface. Each surface connector of the plurality of surface connectors interconnects the outlets of at least one selected pair of the film cooling holes. | 06-05-2014 |
20140356188 | TURBINE BLADE AIRFOILS INCLUDING FILM COOLING SYSTEMS, AND METHODS FOR FORMING AN IMPROVED FILM COOLED AIRFOIL OF A TURBINE BLADE - Turbine blade airfoils, film cooling systems thereof, and methods for forming improved film cooled components are provided. The turbine blade airfoil has an external wall surface and comprises leading and trailing edges, pressure and suction sidewalls both extending between the leading and the trailing edges, an internal cavity, one or more isolation trenches in the external wall surface, a plurality of film cooling holes arranged in cooling rows, and a plurality of span-wise surface connectors interconnecting the outlets of the film cooling holes in the same cooling row to form a plurality of rows of interconnected film cooling holes. Each film cooling hole has an inlet connected to the internal cavity and an outlet opening onto the external wall surface. The span-wise surface connectors in at least one selected row of interconnected film cooling holes are disposed in the one or more isolation trenches. | 12-04-2014 |
20090323951 | PROCESS, CIRCUITS, DEVICES, AND SYSTEMS FOR ENCRYPTION AND DECRYPTION AND OTHER PURPOSES, AND PROCESS MAKING - A wireless communications device ( | 12-31-2009 |
20100283439 | EFFICIENT SWITCH CASCODE ARCHITECTURE FOR SWITCHING DEVICES - Efficient switch cascode architecture for switching devices, such as switching regulators. The cascode architecture includes a switching stage responsive to an external driver signal for switching transitions, and a bias generator operative to bias the cascode transistor of the switching stage to protect the switching stage from damage during the switching transitions. | 11-11-2010 |
20120030447 | PROCESS, CIRCUITS, DEVICES, AND SYSTEMS FOR ENCRYPTION AND DECRYPTION AND OTHER PURPOSES, AND PROCESSES OF MAKING - A wireless communications device ( | 02-02-2012 |
20140091774 | SPREAD-SPECTRUM SWITCHING REGULATOR FOR ELIMINATING MODULATION RIPPLE - A spread-spectrum switching regulator for eliminating modulation ripple includes a high gain amplifier that is responsive to a reference voltage and a feedback voltage of a feedback loop to generate a differential voltage, the feedback voltage being one of an output voltage of the spread-spectrum switching regulator and a fraction of the output voltage; a compensation circuit, coupled to the high gain amplifier, that maintains stability of the feedback loop to generate an error level voltage in response to the differential voltage; a ramp generator that generates a ramp waveform with a slope adaptable to the switching frequency to maintain the duty cycle at a constant value; a pulse width modulator, coupled to the compensation circuit and the ramp generator, that compares the error level voltage and the ramp waveform to generate a pulsed waveform; a driver circuit, coupled to the pulse width modulator, that drives the pulsed waveform to alternately switch a pair of transistors; and an LC network, coupled to the pair of transistors, to average the pulsed waveform to the output voltage. | 04-03-2014 |
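The key idea in application 20140091774 — scaling the ramp slope with the switching frequency so the duty cycle stays constant while the frequency is dithered — can be checked with a short numeric sketch. This is an idealized model invented for illustration (the constant `k` and the linear slope law are assumptions, not taken from the application).

```python
def duty_cycle(error_voltage, switching_freq, k=1.0):
    """Idealized PWM duty cycle: the comparator output duty is the error
    level divided by the ramp peak (slope * period)."""
    # Scaling the slope with frequency keeps the ramp peak constant,
    # so spread-spectrum frequency dithering does not modulate the duty.
    slope = k * switching_freq
    period = 1.0 / switching_freq
    ramp_peak = slope * period  # equals k, independent of frequency
    return error_voltage / ramp_peak
```

With a fixed-slope ramp the peak would shrink as frequency rises, letting the dithering leak into the duty cycle as modulation ripple; the adaptive slope above removes that dependence.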
20110276447 | METHOD AND SYSTEM FOR PROVIDING REAL-TIME COMMUNICATIONS SERVICES - The present invention provides a method and a system for providing at least one communications service to one or more service providers by a communications service provider. Communications capabilities of the communications service provider are sliced into a plurality of virtual slices and each of the plurality of virtual slices is configured for a different service provider from among the one or more service providers. At least one communications service is provided to each of the one or more service providers through a respective configured virtual slice by the communications service provider. Each of the one or more service providers further provides the communications service to a user through the respective configured virtual slice in collaboration with the communications service provider. | 11-10-2011 |
20130039772 | SYSTEM AND METHOD FOR CONTROLLING FLOW IN TURBOMACHINERY - A system includes a turbine. The turbine includes a first turbine blade comprising a leading edge, a blade platform coupled to the first turbine blade, and a protrusion disposed on the blade platform adjacent the leading edge of the first turbine blade. The protrusion is configured to increase a first static pressure of a cooling flow near the leading edge above a second static pressure of a hot gas flow near the leading edge. | 02-14-2013 |
20130080645 | METHOD AND SYSTEM FOR PROVIDING REAL-TIME COMMUNICATIONS SERVICES - The present invention provides a method and a system for providing at least one communications service to one or more service providers by a communications service provider. Communications capabilities of the communications service provider are sliced into a plurality of virtual slices and each of the plurality of virtual slices is configured for a different service provider from among the one or more service providers. At least one communications service is provided to each of the one or more service providers through a respective configured virtual slice by the communications service provider. Each of the one or more service providers further provides the communications service to a user through the respective configured virtual slice in collaboration with the communications service provider. | 03-28-2013 |
20130084830 | METHOD AND SYSTEM FOR PROVIDING REAL-TIME COMMUNICATION SERVICES - The present invention provides a method and a system for providing at least one communications service to one or more service providers by a communications service provider. Communications capabilities of the communications service provider are sliced into a plurality of virtual slices and each of the plurality of virtual slices is configured for a different service provider from among the one or more service providers. At least one communications service is provided to each of the one or more service providers through a respective configured virtual slice by the communications service provider. Each of the one or more service providers further provides the communications service to a user through the respective configured virtual slice in collaboration with the communications service provider. | 04-04-2013 |
20130170983 | TURBINE ASSEMBLY AND METHOD FOR REDUCING FLUID FLOW BETWEEN TURBINE COMPONENTS - According to one aspect of the invention, a turbine assembly includes a stator and a rotor adjacent to the stator. The turbine assembly also includes a passage formed in a projection from the rotor to form a fluid curtain between the rotor and stator, wherein the fluid curtain reduces a flow of fluid into a hot gas path. | 07-04-2013 |
20130336323 | OPTIMIZED BI-DIRECTIONAL COMMUNICATION IN AN INFORMATION CENTRIC NETWORK - Embodiments describe enhancing bi-directional communication in an information centric computer network through a piggyback session, which comprises mapping requests for data received to content, sending at least one piggyback packet to a remote node, wherein a piggyback packet is a data packet comprising a request field and a content field, receiving at least one piggyback packet from the remote node, and processing the piggyback packets. Processing may comprise splitting the request field and the content field in at least one received piggyback packet, sending the content extracted from the received piggyback packet to a client application running on the computing apparatus and setting one or more events to trigger the processing of at least one piggyback packet. Additional embodiments describe the structure of a piggyback packet and the management of a piggyback session at a router device by validating incoming piggyback packets and determining a recipient accordingly. | 12-19-2013 |
20140020102 | INTEGRATED NETWORK ARCHITECTURE - An integrated network architecture can provide information centric and Internet Protocol processing. The integrated network architecture can comprise a packet core that supports packet processing for information centric network packets and Internet Protocol packets, a service core that comprises services supporting a plurality of different operation modes that can be enabled and disabled independently (including an access operation mode, an edge operation mode, a core operation mode, and a proxy operation mode), a client management service that supports network client mobility between network devices, and/or a cache management service that supports cache lookup and cache update services. | 01-16-2014 |
20140193243 | SEAL ASSEMBLY FOR TURBINE SYSTEM - The disclosure includes a sealing assembly for a turbine system. In one embodiment, the sealing assembly is for a turbine having a rotor blade and a stator nozzle. The sealing assembly includes a pair of oppositely facing seal teeth including concave surfaces. The pair of oppositely facing seal teeth are positioned on one of the rotor blade and the stator nozzle, and are for sealingly engaging the other of the rotor blade and the stator nozzle during operation of the turbine. | 07-10-2014 |
20140234070 | Systems and Methods for Facilitating Onboarding of Bucket Cooling Flows - Systems and methods for facilitating onboarding of a cooling fluid are disclosed herein. According to one embodiment, a system may include a rotor assembly, a wheel space cavity adjacent to the rotor assembly, and a bucket shank cavity in fluid communication with the wheel space cavity. The system may also include at least one protrusion disposed on the rotor assembly in the wheel space cavity. The at least one protrusion may be configured to direct the cooling fluid radially from the wheel space cavity to the bucket shank cavity to minimize pressure loss in the cooling fluid. | 08-21-2014 |
20140380427 | METHODS FOR DETERMINING AUTHENTICATION REQUIREMENTS OF INFORMATION CENTRIC NETWORK BASED SERVICES AND DEVICES THEREOF - A method, device, and non-transitory computer readable medium for determining and representing one or more authentication requirements for at least one valid service flow of one or more information centric network (ICN) based services. This technique involves capturing service specification and storing it in a repository. Then, one or more possible service flows are generated and represented based on the nature of contents, delivery options and preferred architecture. This representation is again modified based on the trust level among functional entities and authentication scope which are inferred from the service specification. The final representation of the service flow shows only the valid inter-connections and operations among functional entities and the service flow is constrained by authentication requirement. | 12-25-2014 |
20150100608 | RECONFIGURING AN ASIC AT RUNTIME - Methods for reconfiguring an ASIC at runtime without using voltage over scaling. A functional criticality of a set of logic in the ASIC is identified. Then, the set of logic is classified into a set of regions based on the functional criticality, each region of the set of regions having a target error threshold. Further, each region is power gated at runtime based on the functional criticality such that the target error threshold is achieved without using voltage over scaling. | 04-09-2015 |
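The piggyback packet described in application 20130336323 carries both a request field and a content field in a single data packet, and processing splits the two. A minimal data-structure sketch follows; the field and function names are invented for the example and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class PiggybackPacket:
    """One packet that both requests new data and delivers content
    answering an earlier request from the remote node."""
    request: str    # name of the content being requested
    content: bytes  # payload satisfying a previously received request

def process(packet):
    # Split the two fields: the content goes to the local client
    # application, the request goes to the content store / forwarder.
    return packet.request, packet.content
```

Because each packet rides in both directions of the session, a request and a reply that would otherwise need two packets share one, which is the bi-directional optimization the abstract describes.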
20110214426 | TURBINE SYSTEM INCLUDING VALVE FOR LEAK OFF LINE FOR CONTROLLING SEAL STEAM FLOW - A turbine system includes a valve coupled to a leak off line from a leak packing of a first turbine, the valve controlling a first steam flow used to maintain a constant self-sustaining sealing pressure to a second turbine across numerous loading conditions. A related method is also provided. | 09-08-2011 |
20110247333 | DOUBLE FLOW LOW-PRESSURE STEAM TURBINE - A double flow low-pressure (LP) steam turbine with an LP section that can be engaged and disengaged from a drive train is provided, as are methods for its use. In one embodiment, the invention provides a steam turbine comprising: a high pressure (HP) section; an intermediate pressure (IP) section adjacent the HP section; a first low pressure (LP) section; a second LP section; a crossover pipe connecting the IP section to the first LP section and the second LP section; a drive train extending through the HP section, the IP section, the first LP section, and the second LP section; a device for engaging and disengaging the second LP section from the drive train; a valve for alternately opening and closing a portion of the crossover pipe connecting the IP section to the second LP section; and at least one extraction port for extracting a quantity of steam from at least one of the following: the crossover pipe or an exhaust of the IP section. | 10-13-2011 |
20120027582 | FLOATING PACKING RING ASSEMBLY - A packing ring assembly for use between a rotating and a stationary component in a turbomachine is disclosed, the assembly including an arcuate packing ring casing, an arcuate packing ring segment positioned at least partially within the packing ring casing, and a resistance component configured to allow movement of the packing ring segment in an axial direction, relative to the rotating component, between a first and second position in response to a pressure condition. In one embodiment, the resistance component allows movement when the pressure condition comprises approximately 30% of the turbomachine load. Also disclosed is an altered surface topography of the rotating component to accommodate variable length teeth extending from the packing ring segment, such that when the packing ring segment is in the first position, a clearance between the packing ring segment and the rotating component is larger than when the packing ring segment is in the second position. | 02-02-2012 |
20120324862 | SYSTEMS AND METHODS FOR STEAM TURBINE WHEEL SPACE COOLING - The present application provides a steam turbine system. The steam turbine system may include a high pressure section, an intermediate pressure section, a shaft packing location positioned between the high pressure section and the intermediate pressure section, a source of steam, and a cooling system. The cooling system delivers a cooling steam extraction from the source of steam to the shaft packing location so as to cool the high pressure section and the intermediate pressure section. | 12-27-2012 |
20130295614 | NUCLEOTIDE SEQUENCES, METHODS, KIT AND A RECOMBINANT CELL THEREOF - The present disclosure relates to recombinant adeno-associated virus (AAV) vector serotype, wherein the capsid protein of AAV serotypes is mutated at single or multiple sites. The disclosure further relates to an improved transduction efficiency of these mutant AAV serotypes. The AAV serotypes disclosed are AAV1, AAV2, AAV3, AAV4, AAV5, AAV6, AAV7, AAV8, AAV9, AAV10. The instant disclosure relates to nucleotide sequences, recombinant vector, methods and kit thereof. | 11-07-2013 |
20140162319 | NUCLEOTIDE SEQUENCES, METHODS, KIT AND A RECOMBINANT CELL THEREOF - The present disclosure relates to recombinant adeno-associated virus (AAV) vector serotype, wherein the capsid protein of AAV serotypes is mutated at single or multiple sites. The disclosure further relates to an improved transduction efficiency of these mutant AAV serotypes. The AAV serotypes disclosed are AAV1, AAV2, AAV3, AAV4, AAV5, AAV6, AAV7, AAV8, AAV9, AAV10. The instant disclosure relates to nucleotide sequences, recombinant vector, methods and kit thereof. | 06-12-2014 |
20130028267 | Sharing A Transmission Control Protocol Port By A Plurality Of Applications - Methods, apparatuses, and computer program products for sharing a transmission control protocol (TCP) port by a plurality of applications are provided. Embodiments include receiving, by a transmission controller from a client, a first TCP packet that includes an indication of a new TCP connection for a TCP port; determining, by the transmission controller, an origination of the first TCP packet; identifying, by the transmission controller, a TCP sequence number range associated with the determined origination; selecting, by the transmission controller, an initial sequence number (ISN) within the identified TCP sequence number range; and sending, by the transmission controller to the client, a second TCP packet that includes the selected ISN. | 01-31-2013 |
20130031254 | Sharing A Transmission Control Protocol Port By A Plurality Of Applications - Methods, apparatuses, and computer program products for sharing a transmission control protocol (TCP) port by a plurality of applications are provided. Embodiments include receiving, by a transmission controller from a client, a first TCP packet that includes an indication of a new TCP connection for a TCP port; determining, by the transmission controller, an origination of the first TCP packet; identifying, by the transmission controller, a TCP sequence number range associated with the determined origination; selecting, by the transmission controller, an initial sequence number (ISN) within the identified TCP sequence number range; and sending, by the transmission controller to the client, a second TCP packet that includes the selected ISN. | 01-31-2013 |
20140281671 | ENHANCED FAILOVER MECHANISM IN A NETWORK VIRTUALIZED ENVIRONMENT - An embodiment of the invention is associated with a virtualized environment that includes a hypervisor, client LPARs, and virtual servers that each has a SEA, wherein one SEA is selected to be primary SEA for connecting an LPAR and specified physical resources. A first SEA of a virtual server sends a call to the hypervisor, and in response the hypervisor enters physical adapter capability information, contained in the call and pertaining to the first SEA, into a table. Further in response to receiving the call, the hypervisor decides whether or not the first SEA of the virtual server should then be the primary SEA. The hypervisor sends a return call indicating its decision to the first SEA. | 09-18-2014 |
20140281701 | ENHANCED FAILOVER MECHANISM IN A NETWORK VIRTUALIZED ENVIRONMENT - An embodiment of the invention is associated with a virtualized environment that includes a hypervisor, client LPARs, and virtual servers that each has a SEA, wherein one SEA is selected to be primary SEA for connecting an LPAR and specified physical resources. A first SEA of a virtual server sends a call to the hypervisor, and in response the hypervisor enters physical adapter capability information, contained in the call and pertaining to the first SEA, into a table. Further in response to receiving the call, the hypervisor decides whether or not the first SEA of the virtual server should then be the primary SEA. The hypervisor sends a return call indicating its decision to the first SEA. | 09-18-2014 |
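Applications 20130028267 and 20130031254 share a TCP port among applications by assigning each origination its own slice of the sequence-number space and choosing the initial sequence number (ISN) from that slice. A rough sketch of that partitioning, with hypothetical range assignments invented for the example:

```python
import random

# Hypothetical per-origination ISN ranges: each application sharing the
# TCP port gets a disjoint slice of the 32-bit sequence space.
ISN_RANGES = {
    "app_a": (0x00000000, 0x7FFFFFFF),
    "app_b": (0x80000000, 0xFFFFFFFF),
}

def select_isn(origination):
    # Pick an initial sequence number inside the slice assigned to the
    # determined origination of the connection request.
    lo, hi = ISN_RANGES[origination]
    return random.randint(lo, hi)

def demultiplex(sequence_number):
    # Later traffic can be routed to the right application by checking
    # which slice its sequence number falls into.
    for app, (lo, hi) in ISN_RANGES.items():
        if lo <= sequence_number <= hi:
            return app
    return None
```

A real implementation would also have to handle sequence-number wraparound within a slice; the sketch ignores that to keep the partitioning idea visible.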
20090018671 | METHOD AND SYSTEM FOR PROCESS CONTROL - A method and system for process control. The control system can be operably coupled to a processing system. The control system can include control devices operably coupled to the processing system; a modeling module to provide a linear model based at least in part on the processing system; a computational module to provide controller algorithms associated with the control devices; a user interface module to present at a user interface controller information based at least in part on the linear model and the controller algorithms; and a separate coordination module for establishing communication between the modeling module, the computational module and the user interface module. One or more control signals can be provided to at least one of the control devices for controlling the processing system. In one embodiment, the modeling module can generate the linear model from a non-linear process. | 01-15-2009 |
20090043546 | METHOD AND SYSTEM FOR PROCESS CONTROL - A method and system for process control using a model predictive controller. The control system can have one or more control devices operably coupled to a processing system for controlling a process of the processing system; a modeling tool to provide a non-linear model based at least in part on the process and to provide a plurality of linearized models based at least in part on the non-linear model, where the plurality of linearized models are linearized at different linearization rates; and a controller operably coupled to the modeling tool. The controller can select one of the plurality of linearized models based on a comparison of the plurality of linearized models with a reference model. The controller can send one or more control signals to at least one of the one or more control devices. The one or more control signals can be determined using the selected one of the plurality of linearized models. | 02-12-2009 |
20110251700 | SYSTEM AND METHOD FOR SOLVING CHEMICAL ENGINEERING EQUATIONS AND MODEL DEVELOPMENT USING EQUATION EDITOR - A system includes a process controller and an equation evaluation apparatus. The equation evaluation apparatus includes an equation editor, a model factory, and an equation evaluation engine. The equation editor is adapted to receive equations describing a process to be controlled by the process controller. The equation editor is also adapted to generate model information representing the equations. The model factory is adapted to receive the model information and generate an equation stack representing the equations. The equation evaluation engine is adapted to receive evaluation information from the process controller, evaluate at least one of the equations using the evaluation information and the equation stack, and send a result of the evaluation to the process controller. The model information could include information representing algebraic equations, differential equations, algebraic states, differential states, inputs, parameters, constants, and/or expressions. | 10-13-2011 |
20130030554 | INTEGRATED LINEAR/NON-LINEAR HYBRID PROCESS CONTROLLER - A model predictive controller (MPC) for controlling physical processes includes a non-linear control section that includes a memory that stores a non-linear (NL) model that is coupled to a linearizer that provides at least one linearized model, and a linear control section that includes a memory that stores a linear model. A controller engine is coupled to receive both the linearized model and linear model. The MPC includes a switch that in one position causes the controller engine to operate in a linear mode utilizing the linear model to implement linear process control and in another position causes the controller engine to operate in a NL mode utilizing the linearized model to implement NL process control. The switch can be an automatic switch configured for automatically switching between linear process control and NL process control. | 01-31-2013 |
20150300674 | CONTROLLER AND LOOP PERFORMANCE MONITORING IN A HEATING, VENTILATING, AND AIR CONDITIONING SYSTEM - A controller and loop performance monitoring system is coupled to a controller, detects loop performance degradation in time, and diagnoses a cause of the loop performance degradation. If the cause of loop performance degradation is poor controller tuning, a re-tuning mechanism is triggered. If the cause of loop performance degradation is external to the controller (a disturbance acting on the loop, hardware malfunction etc.), an action defined in control strategy is taken, or the user is informed via alarm, user interface, or upper layer software that collects the performance measures. The monitoring itself is designed to be recursive and with low memory demands, so it can be implemented directly in the controller, without need for data transfer and storage. The monitoring is modular, consisting of oscillation detection and diagnosis part, performance indices part, internal logic part, and triggering part, easily extensible by other performance indices or parts (e.g. for overshoot monitoring). The oscillation detection and diagnosis part includes controller output oscillation monitoring, the performance indices part includes predictability index and offset index. The outputs of the controller and loop performance monitoring are overall loop performance together with loop diagnosis information, and overall controller performance together with controller diagnosis. The outputs of the controller and loop performance monitoring are used as parts of controller and loop performance monitoring user interface. | 10-22-2015 |
20150309506 | APPARATUS AND METHOD FOR PROVIDING A GENERALIZED CONTINUOUS PERFORMANCE INDICATOR - A method includes, using at least one processing device, obtaining multiple diagnostic indicators associated with at least a portion of an industrial process system and combining the diagnostic indicators to form a generalized indicator. Each diagnostic indicator has a value, and the generalized indicator is associated with a position on a continuous scale. The continuous scale could include a color gradient, and the method could include displaying the generalized indicator along the color gradient with a color based on its position. Multiple generalized indicators associated with multiple portions of the process system could be displayed within a torus or circle, and different portions of the torus or circle can be associated with different portions of the process system. Different concentric tori or circles could be associated with different periods of time, and at least one concentric torus or circle could identify a predicted behavior of the process system. | 10-29-2015 |
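Application 20150309506 combines multiple diagnostic indicators into one generalized indicator positioned on a continuous scale. One plausible combination rule is a weighted mean; this sketch is an assumption for illustration — the application does not specify the combining function used.

```python
def generalized_indicator(indicators, weights=None):
    """Combine several diagnostic indicator values (each assumed to lie
    in [0, 1]) into a single position on a continuous 0-to-1 scale,
    here modeled as a weighted mean."""
    if weights is None:
        weights = [1.0] * len(indicators)
    total = sum(weights)
    # The result can then be mapped onto a color gradient, e.g. green
    # at 0.0 through red at 1.0, as the abstract's continuous scale.
    return sum(v * w for v, w in zip(indicators, weights)) / total
```

Weights let more safety-critical portions of the process system dominate the displayed position without changing the scale itself.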
20130114715 | Delayed Duplicate I-Picture for Video Coding - A method is provided that includes receiving pictures of a video sequence in a video encoder, and encoding the pictures to generate a compressed video bit stream that is transmitted to a video decoder in real-time, wherein encoding the pictures includes selecting a picture to be encoded as a delayed duplicate intra-predicted picture (DDI), wherein the picture would otherwise be encoded as an inter-predicted picture (P-picture), encoding the picture as an intra-predicted picture (I-picture) to generate the DDI, wherein the I-picture is reconstructed and stored for use as a reference picture for a decoder refresh picture, transmitting the DDI to the video decoder in non-real time, selecting a subsequent picture to be encoded as the decoder refresh picture, and encoding the subsequent picture in the compressed bit stream as the decoder refresh picture, wherein the subsequent picture is encoded as a P-picture predicted using the reference picture. | 05-09-2013 |
20130272429 | Color Component Checksum Computation in Video Coding - Checksum computation for video coding is provided that breaks the dependency between the color components of a picture in the prior art. More specifically, rather than computing a single checksum for a picture as in the prior art, a separate checksum is computed for each color component. Computing a separate checksum for each color component enables parallel computation of the component checksums. Methods are provided for computing three separate checksums after a picture is decoded. Methods are also provided for computing three separate checksums on a largest coding unit basis, thus allowing the checksums for a picture to be computed as the picture is being decoded. | 10-17-2013 |
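The per-component checksum idea in 20130272429 can be sketched in a few lines: one checksum per color plane, with no cross-plane dependency, so the three computations can run in parallel. CRC-32, the Y/Cb/Cr plane names, and the thread pool are illustrative assumptions; the abstract does not fix a particular checksum algorithm:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def component_checksums(planes):
    """Compute an independent CRC-32 per color plane. Because each
    checksum depends only on its own plane's bytes, the computations
    can proceed concurrently, unlike a single whole-picture checksum."""
    with ThreadPoolExecutor(max_workers=len(planes)) as pool:
        futures = {name: pool.submit(zlib.crc32, data)
                   for name, data in planes.items()}
    return {name: f.result() for name, f in futures.items()}

# toy decoded picture: flat luma plane plus two chroma planes
decoded = {"Y": b"\x10" * 64, "Cb": b"\x80" * 16, "Cr": b"\x80" * 16}
sums = component_checksums(decoded)
```

The same structure extends naturally to the abstract's per-largest-coding-unit variant by updating three running checksums as each coding unit is reconstructed.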
20140010293 | METHOD AND SYSTEM FOR VIDEO PICTURE INTRA-PREDICTION ESTIMATION - Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block. | 01-09-2014 |
20100060749 | REDUCING DIGITAL IMAGE NOISE - Devices, systems, methods, and other embodiments associated with reducing digital image noise are described. In one embodiment, a method includes determining, on a per pixel basis, mosquito noise values associated with pixels of a digital image. The method determines, on a per pixel basis, block noise values associated with the digital image. The method filters the digital image with a plurality of adaptive filters. A compression artifact in the digital image is reduced. The compression artifact is reduced by combining filter outputs from the plurality of adaptive filters. The filter outputs are combined based, at least in part, on the mosquito noise values and the block noise values. | 03-11-2010 |
20100134496 | BIT RESOLUTION ENHANCEMENT - Devices, systems, apparatuses, methods, and other embodiments associated with bit resolution enhancement are described. In one embodiment, an apparatus includes logic configured to produce a high-resolution pixel from a low-resolution pixel. The apparatus includes logic configured to classify the high-resolution pixel as being in a smooth region of an image based on at least one of a gradient value and a variance value associated with the low-resolution pixel. The apparatus includes logic configured to selectively re-classify the high-resolution pixel as not being in the smooth region of the image based on a set of neighboring high-resolution pixels associated with the high-resolution pixel. The apparatus includes logic configured to selectively filter the high-resolution pixel based on whether the high-resolution pixel remains classified as being in the smooth region of the image. | 06-03-2010 |
20120027103 | BLOCK NOISE DETECTION IN DIGITAL VIDEO - Systems and methods are provided for determining characteristics of video data. A frame of video data is obtained, where the frame is represented by pixel data. A value is assigned to an element of a detection array based on pixel data in a portion of the video frame corresponding to the element. A frequency transform of values of the detection array is determined, and a characteristic of the video data is extracted based on the output of the frequency transform. | 02-02-2012 |
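The detection-array-plus-frequency-transform scheme in 20120027103 can be illustrated on a luma frame: fill one array element per column boundary with the summed horizontal gradient there, then look for a spectral peak whose period matches the coding block size. Filling the array with gradients and using a direct DFT are illustrative choices, not the patent's exact method:

```python
import cmath

def estimate_block_period(frame, width):
    """Estimate the coding block period (in pixels) of a frame given as
    a flat, row-major list of luma samples."""
    rows = [frame[i * width:(i + 1) * width] for i in range(len(frame) // width)]
    # detection array: summed absolute horizontal gradient per column edge
    detect = [sum(abs(r[x] - r[x - 1]) for r in rows) for x in range(1, width)]
    n = len(detect)
    mean = sum(detect) / n
    centered = [d - mean for d in detect]
    # magnitude spectrum via a direct DFT (O(n^2), fine for a sketch)
    mags = [abs(sum(c * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, c in enumerate(centered)))
            for k in range(1, n // 2)]
    k_peak = 1 + max(range(len(mags)), key=mags.__getitem__)
    return round(n / k_peak)
```

A frame built from flat 8-pixel-wide blocks puts all the gradient energy at 8-pixel intervals, so the dominant spectral peak recovers a period of 8.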
20140056537 | REDUCING DIGITAL IMAGE NOISE - Devices, systems, methods, and other embodiments associated with reducing digital image noise are described. In one embodiment, a method includes filtering a digital image with a plurality of adaptive filters, wherein the plurality of adaptive filters include a first filter configured to filter noise surrounding one or more edges in the digital image, and a second filter configured to filter noise caused by a block based encoding of the digital image. The method further includes reducing a compression artifact from selected pixels in the digital image, wherein the compression artifact is reduced by (i) combining an output from the first filter and an output from the second filter in response to the digital image being determined to be blocky, and (ii) not combining the output from the first filter with the output of the second filter in response to the digital image not being determined to be blocky. | 02-27-2014 |
20090319623 | RECIPIENT-DEPENDENT PRESENTATION OF ELECTRONIC MESSAGES - A message originator, such as an author of an email, can designate “section access settings” which can selectively permit or deny access to portions of the email's content. Recipients who are not authorized may not exercise the access right on the designated portion of content. For example, an access right may allow displaying a section of text in an email message only for specified recipients and not for other recipients. In a preferred embodiment, the entire email content, including restricted portions, is provided to all recipients, including unauthorized recipients. Unauthorized recipients are prevented from exercising the access right even though the restricted portion has been received. | 12-24-2009 |
20100017362 | SIMPLIFYING ACCESS TO DOCUMENTS ACCESSED RECENTLY IN A REMOTE SYSTEM - Simplifying access to documents accessed recently on a remote system. In one embodiment, the list of documents accessed by a user using a first instance of an application in a first/remote system is maintained. The list of documents is provided/displayed to the same user when using a second instance of the same application on a second/local system, thereby facilitating the user to access the documents accessed recently on the remote system. | 01-21-2010 |
20100325294 | ENFORCING COMMUNICATION SECURITY FOR SELECTED RESOURCES - A secure resource enforcer is configured to identify and provide selected secure resources. The secure resource enforcer includes a determining module configured to determine whether a resource of a web page that is requested in a first request by a client computer requires a secure connection based on a type of the resource that is requested. The secure resource enforcer also includes a redirecting module configured to redirect the client computer to a secure socket for the resource when the resource requires the secure connection. The secure resource enforcer further includes a receiving module configured to receive a second request from the client for the resource over the secure socket and a secure resource providing module configured to provide the requested resource to the client over the secure socket. | 12-23-2010 |
20110004689 | ACCESS OF ELEMENTS FOR A SECURE WEB PAGE THROUGH A NON-SECURE CHANNEL - Particular embodiments generally relate to allowing access of non-secure elements through a non-secure channel when a top-level page was accessed through a secure connection. In one embodiment, a webpage is accessed over a secure channel. The webpage includes secure and non-secure elements. When a non-secure element for the webpage is determined, a client may message with the server to open a non-secure channel for accessing the non-secure element. For example, the client may request port information in the request. The server then can respond with port information for a non-secure channel. The client then accesses data for the non-secure element through the non-secure channel using the port information. | 01-06-2011 |
20110060790 | FACILITATING A SERVICE APPLICATION TO OPERATE WITH DIFFERENT SERVICE FRAMEWORKS IN APPLICATION SERVERS - An aspect of the present invention facilitates a service application to operate with different frameworks executing in application servers. In one embodiment, the different interfaces according to which the different frameworks are designed to operate with service application are identified, including the interface implemented by the service application. Wrapper modules are then generated based on the differences between the identified interfaces and the interface implemented by the service application. The generated wrapper modules are then deployed along with the service application to facilitate the service application to operate with different frameworks. | 03-10-2011 |
20110166952 | FACILITATING DYNAMIC CONSTRUCTION OF CLOUDS - In an embodiment, a customer sends a set of requirements for a cloud to a cloud complier, which identifies vendors matching the set of requirements. Information on the matching set of vendors is provided to the customer, thereby enabling the customer to select desired vendors for constructing the cloud. | 07-07-2011 |
20120072703 | SPLIT PATH MULTIPLY ACCUMULATE UNIT - In one embodiment, a processor includes a multiply-accumulate (MAC) unit having a first path to handle execution of an instruction if a difference between at least a portion of first and second operands and a third operand is less than a threshold value, and a second path to handle the instruction execution if the difference is greater than the threshold value. Based on the difference, at least part of the third operand is to be provided to a multiplier of the MAC unit or to a compressor of the second path. Other embodiments are described and claimed. | 03-22-2012 |
20120137000 | CHANNEL MANAGER FOR ACCESSING ELEMENTS FOR A SECURE WEB PAGE THROUGH A NON-SECURE CHANNEL - Particular embodiments generally relate to allowing access of non-secure elements through a non-secure channel when a top-level page was accessed through a secure connection. In one embodiment, a webpage is accessed over a secure channel. The webpage includes secure and non-secure elements. When a non-secure element for the webpage is determined, a client may message with the server to open a non-secure channel for accessing the non-secure element. For example, the client may request port information in the request. The server then can respond with port information for a non-secure channel. The client then accesses data for the non-secure element through the non-secure channel using the port information. | 05-31-2012 |
20130191392 | ADVANCED SUMMARIZATION BASED ON INTENTS - A method for summarizing content using weighted Formal Concept Analysis (wFCA) is provided. The method includes (i) identifying, by a processor, one or more keywords in the content based on parts of speech, (ii) disambiguating, by the processor, at least one ambiguous keyword from the one or more keywords using the wFCA, (iii) identifying, by the processor, an association between the one or more keywords and at least one sentence in the content, and (iv) generating, by the processor, a summary of the content based on the association. | 07-25-2013 |
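The pipeline in 20130191392 (identify keywords, associate them with sentences, generate a summary) can be reduced to a toy extractive sketch. The wFCA disambiguation step is out of scope here and replaced by simple frequency weighting, which is an assumption of this sketch, not the patented method:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for",
              "on", "by", "with", "it"}

def summarize(text, n=1):
    """Score each sentence by the corpus frequency of its non-stopword
    keywords, then return the n best sentences in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    ranked = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in ranked]   # preserve original order
```

In the full method, the frequency table would be replaced by concept weights from the wFCA lattice, so ambiguous keywords contribute to the sense the surrounding context supports.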
20130191735 | ADVANCED SUMMARIZATION ON A PLURALITY OF SENTIMENTS BASED ON INTENTS - A method of summarizing content around a sentiment using weighted Formal Concept Analysis (wFCA) is provided. The method includes identifying one or more sentences associated with the content based on parts of speech, identifying at least one sentiment associated with the one or more sentences based on the parts of speech, identifying one or more keywords in the one or more sentences, disambiguating at least one ambiguous keyword from the one or more keywords using the wFCA, computing a weight for each sentence of the one or more sentences based on a number of keywords of the one or more keywords associated with each sentence, processing an input including an indication of the sentiment, and generating a summary of the content around the sentiment based on (i) the weight, and (ii) at least one of (a) the at least one sentiment, and (b) the indication. | 07-25-2013 |
20130198126 | SYSTEM AND METHOD FOR PRIORITIZING RESUMES BASED ON A JOB DESCRIPTION - A method for prioritizing one or more resumes based on a job description is provided. The method includes (i) processing the job description to extract one or more keywords and a first period, (ii) extracting, from a first resume and a second resume of the one or more resumes, one or more sections, one or more events, a first date range, and a second date range, (iii) obtaining a second period and a third period, (iv) comparing, in the first resume and the second resume, the one or more keywords with the one or more events and the first period with the third period to obtain a relevant event and a relevant section, (v) computing a first weight for the first resume and a second weight for the second resume, and (vi) prioritizing the first resume and the second resume based on the first weight and the second weight. | 08-01-2013 |
20130198195 | SYSTEM AND METHOD FOR IDENTIFYING ONE OR MORE RESUMES BASED ON A SEARCH QUERY USING WEIGHTED FORMAL CONCEPT ANALYSIS - A system for identifying, from a set of resumes, one or more resumes that match a search query using a resume identifying tool is provided. The system includes a memory unit that stores a database and a set of modules, a display unit, and a processor. The set of modules includes (a) a keyword extraction module that extracts at least one keyword from the search query, (b) a disambiguation module that disambiguates the at least one keyword based on weighted formal concept analysis, and (c) a resume identification module that identifies the one or more resumes by matching (i) the at least one keyword associated with the search query, and (ii) at least one category associated with the at least one keyword with (i) at least one disambiguated keyword associated with each resume of the set of resumes, and (ii) at least one category associated with the at least one disambiguated keyword. | 08-01-2013 |
20130198599 | SYSTEM AND METHOD FOR ANALYZING A RESUME AND DISPLAYING A SUMMARY OF THE RESUME - A computer-implemented method for generating a summary of one or more resumes, from a set of resumes, to analyze insights of the one or more resumes is provided. The computer-implemented method includes (i) processing a first input that includes a first indication to select a first resume from the one or more resumes, (ii) extracting, from the first resume, first information, (iii) obtaining, from the first resume, second information, (iv) generating a first table based on the first information and the second information, and (v) generating a first summary based on the first table, wherein the first summary indicates a first correlation between (i) the one or more events associated with the first section and (ii) the one or more events associated with the second section over years. | 08-01-2013 |
20130218671 | SYSTEM AND METHOD FOR SELECTION AND DELIVERY OF A TARGETED ADVERTISEMENT TO A SHOPPING CART - A method of selecting and displaying a targeted advertisement at a shopping cart is provided. The method includes (a) processing, by a processor, a product identifier received from the shopping cart when a first product is added to the shopping cart, (b) obtaining one or more price indicators that correspond to the first product in the shopping cart, (c) selecting one or more advertisements that correspond to the first product based on (i) the product identifier, and (ii) the one or more price indicators, and (d) delivering or displaying the one or more advertisements at the shopping cart. The product identifier is unique and specific to the first product. | 08-22-2013 |
20140368524 | ONLINE LEARNING BASED ALGORITHMS TO INCREASE RETENTION AND REUSE OF GPU-GENERATED DYNAMIC SURFACES IN OUTER-LEVEL CACHES - Some implementations disclosed herein provide techniques for caching memory data and for managing cache retention. Different cache retention policies may be applied to different cached data streams such as those of a graphics processing unit. Actual performance of the cache with respect to the data streams may be observed, and the cache retention policies may be varied based on the observed actual performance. | 12-18-2014 |
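The "vary the retention policy based on observed actual performance" loop in 20140368524 can be sketched as a per-stream controller: track each stream's hit rate over a window and switch that stream between retaining and bypassing the cache. The two policies, the threshold, and the window length are assumptions of this sketch:

```python
from collections import defaultdict

class AdaptiveRetention:
    """Per-stream cache retention policy that adapts to observed hit
    rates: streams that keep missing are switched to 'bypass' so they
    stop evicting useful data; streams that hit keep 'retain'."""

    def __init__(self, threshold=0.2, window=100):
        self.threshold = threshold
        self.window = window
        self.counts = defaultdict(lambda: [0, 0])    # stream -> [hits, accesses]
        self.policy = defaultdict(lambda: "retain")  # default policy

    def access(self, stream, hit):
        c = self.counts[stream]
        c[0] += int(hit)
        c[1] += 1
        if c[1] >= self.window:                      # re-evaluate each window
            self.policy[stream] = ("retain" if c[0] / c[1] >= self.threshold
                                   else "bypass")
            c[0] = c[1] = 0
        return self.policy[stream]
```

For GPU-generated dynamic surfaces, the stream identity would come from the surface or engine issuing the access; here it is just a string key.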
20110106935 | POWER MANAGEMENT FOR IDLE SYSTEM IN CLUSTERS - Clusters of systems are employed to increase computation capacity for specific services, like the web, or protocols, such as the file transfer protocol. Broadly contemplated herein is an arrangement involving a set of compute nodes that perform the actual task and load balancer systems that monitor and distribute work among the compute nodes, taking into account the current load and remaining compute capacity available in each of the nodes. Power saving techniques can be applied to nodes in the cluster that are not actively running the workload due to lower utilization of the total cluster capacity. | 05-05-2011 |
20110131425 | SYSTEMS AND METHODS FOR POWER MANAGEMENT IN A HIGH PERFORMANCE COMPUTING (HPC) CLUSTER - Embodiments of the invention broadly contemplate systems, methods, apparatuses and program products providing a power management technique for an HPC cluster with performance improvements for parallel applications. According to various embodiments of the invention, power usage of an HPC cluster is reduced by boosting the performance of one or more select nodes within the cluster so that the one or more nodes take less time to complete. Embodiments of the invention accomplish this by selectively identifying the appropriate node(s) (or core(s) within the appropriate node(s)) in the cluster and increasing the computing capacity of the selected node(s) (or core(s) within the appropriate node(s)). | 06-02-2011 |
20120124269 | Organizing Memory for Effective Memory Power Management - A kernel of the operating system reorganizes a plurality of memory units into a plurality of virtual nodes in a virtual non-uniform memory access architecture in response to receiving a configuration of the plurality of memory units from a firmware. A subsystem of the operating system determines an order of allocation of the plurality of virtual nodes calculated to maintain a maximum number of the plurality of memory units devoid of references. The memory controller transitions one or more memory units into a lower power state in response to the one or more memory units being devoid of one or more references for the period of time. | 05-17-2012 |
20120180061 | Organizing Task Placement Based On Workload Characterizations - Task placement is influenced within a multiple processor computer. Tasks are classified as either memory bound or CPU bound by observing certain performance counters over the task execution. During a first pass of task load balance, tasks are balanced across various CPUs to achieve a fairness goal, where tasks are allocated CPU resources in accordance to their established fairness priority value. During a second pass of task load balance, tasks are rebalanced across CPUs to reduce CPU resource contention, such that the rebalance of tasks in the second pass does not violate fairness goals established in the first pass. In one embodiment, the second pass could involve re-balancing memory bound tasks across different cache domains, where CPUs in a cache domain share a same last mile CPU cache such as an L3 cache. In another embodiment, the second pass could involve re-balancing CPU bound tasks across different CPU domains of a cache domain, where CPUs in a CPU domain could be sharing some or all of CPU execution unit resources. The two passes could be executed at different frequencies. | 07-12-2012 |
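The two-pass scheme in 20120180061 can be illustrated with a toy scheduler: pass one balances weighted load across CPUs for fairness; pass two swaps equal-weight memory-bound and CPU-bound tasks across cache domains, which spreads the memory-bound work without disturbing any per-CPU load from pass one. The greedy fairness pass and the equal-weight swap rule are simplifying assumptions standing in for the kernel's load balancer:

```python
from collections import defaultdict

def place_tasks(tasks, n_cpus, cpus_per_domain):
    """tasks: list of {'name', 'weight', 'kind'} with kind in {'mem', 'cpu'}.
    Returns a task-name -> CPU-index placement."""
    # Pass 1: fairness -- greedy longest-processing-time balance by weight.
    load = [0.0] * n_cpus
    placement = {}
    for t in sorted(tasks, key=lambda t: -t["weight"]):
        cpu = min(range(n_cpus), key=load.__getitem__)
        placement[t["name"]] = cpu
        load[cpu] += t["weight"]

    def domain(cpu):
        return cpu // cpus_per_domain

    # Pass 2: spread memory-bound tasks across cache domains by swapping
    # equal-weight pairs, which leaves the pass-1 per-CPU loads untouched.
    by_kind = defaultdict(list)
    for t in tasks:
        by_kind[t["kind"]].append(t)
    for mem in by_kind["mem"]:
        d = domain(placement[mem["name"]])
        crowd = sum(1 for m in by_kind["mem"] if domain(placement[m["name"]]) == d)
        for other in by_kind["cpu"]:
            d2 = domain(placement[other["name"]])
            crowd2 = sum(1 for m in by_kind["mem"]
                         if domain(placement[m["name"]]) == d2)
            if d2 != d and other["weight"] == mem["weight"] and crowd2 + 1 < crowd:
                placement[mem["name"]], placement[other["name"]] = \
                    placement[other["name"]], placement[mem["name"]]
                break
    return placement
```

With two memory-bound and two CPU-bound tasks of equal weight on a 4-CPU, 2-domain machine, pass one may put both memory-bound tasks in one cache domain; pass two then swaps one of them out, so each last-level cache serves only one memory-hungry task.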
20130124826 | Optimizing System Throughput By Automatically Altering Thread Co-Execution Based On Operating System Directives - A technique for optimizing program instruction execution throughput in a central processing unit core (CPU). The CPU implements a simultaneous multithreading (SMT) operational mode wherein program instructions associated with at least two software threads are executed in parallel as hardware threads while sharing one or more hardware resources used by the CPU, such as cache memory, translation lookaside buffers, functional execution units, etc. As part of the SMT mode, the CPU implements an autothread (AT) operational mode. During the AT operational mode, a determination is made whether there is a resource conflict between the hardware threads that undermines instruction execution throughput. If a resource conflict is detected, the CPU adjusts the relative instruction execution rates of the hardware threads based on relative priorities of the software threads. | 05-16-2013 |
20130339200 | Fair Distribution Of Power Savings Benefit Among Customers In A Computing Cloud - A technique for fairly distributing power savings benefits to virtual machines (VMs) provisioned to customers in a computing cloud. One or more VMs are provisioned on a target cloud host in response to resource requests from one or more customer devices. Host power savings on the target host are monitored. The host power savings are used as a variable component in determining per-customer cloud usage for accounting purposes. The host power savings may be reflected as power related cost savings in a generated cloud usage calculation result that may be distributed proportionately to the VMs based on VM size and utilization. VMs of relatively larger size and lower utilization may receive a higher percentage of the cost savings than VMs of relatively smaller size and higher utilization. | 12-19-2013 |
20130339201 | Fair Distribution Of Power Savings Benefit Among Customers In A Computing Cloud - A technique for fairly distributing power savings benefits to virtual machines (VMs) provisioned to customers in a computing cloud. One or more VMs are provisioned on a target cloud host in response to resource requests from one or more customer devices. Host power savings on the target host are monitored. The host power savings are used as a variable component in determining per-customer cloud usage for accounting purposes. The host power savings may be reflected as power related cost savings in a generated cloud usage calculation result that may be distributed proportionately to the VMs based on VM size and utilization. VMs of relatively larger size and lower utilization may receive a higher percentage of the cost savings than VMs of relatively smaller size and higher utilization. | 12-19-2013 |
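The proportional distribution described in 20130339200/20130339201 (larger, less-utilized VMs receive a bigger share of the savings) comes down to a weighted split. The weight formula `size * (1 - utilization)` is an illustrative assumption, not taken from the patents:

```python
def distribute_savings(total_savings, vms):
    """Split host power savings across VMs. vms maps a VM name to a
    (size, utilization) pair; each VM's share grows with its size and
    shrinks with its utilization, then the shares are normalized so
    they sum to the total host savings."""
    weights = {name: size * (1.0 - util) for name, (size, util) in vms.items()}
    total = sum(weights.values())
    return {name: total_savings * w / total for name, w in weights.items()}

# a large, mostly idle VM vs. a small, busy one, splitting 12 units of savings
shares = distribute_savings(12.0, {"big_idle": (8, 0.25), "small_busy": (2, 0.75)})
```

The intuition matches the abstract: an idle VM is what allows the host to save power, so it is credited with most of the benefit.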
20140089637 | Optimizing System Throughput By Automatically Altering Thread Co-Execution Based On Operating System Directives - A technique for optimizing program instruction execution throughput in a central processing unit core (CPU). The CPU implements a simultaneous multithreading (SMT) operational mode wherein program instructions associated with at least two software threads are executed in parallel as hardware threads while sharing one or more hardware resources used by the CPU, such as cache memory, translation lookaside buffers, functional execution units, etc. As part of the SMT mode, the CPU implements an autothread (AT) operational mode. During the AT operational mode, a determination is made whether there is a resource conflict between the hardware threads that undermines instruction execution throughput. If a resource conflict is detected, the CPU adjusts the relative instruction execution rates of the hardware threads based on relative priorities of the software threads. | 03-27-2014 |
20140115225 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 04-24-2014 |
20140115226 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 04-24-2014 |
20140136864 | MANAGEMENT TO REDUCE POWER CONSUMPTION IN VIRTUAL MEMORY PROVIDED BY PLURALITY OF DIFFERENT TYPES OF MEMORY DEVICES - Reduction of memory power consumption in virtual memory systems that have a combination of different types of physical memory devices working together in a system's primary memory, to achieve performance with optimum reductions in power consumption, by storing, in the virtual memory in the kernel, topology data for each of the different memory devices used. | 05-15-2014 |
20140137105 | VIRTUAL MEMORY MANAGEMENT TO REDUCE POWER CONSUMPTION IN THE MEMORY - Reducing virtual memory power consumption during idle states in virtual memory systems comprises tracking the topology of the system memory by the system hypervisor and the operating system running on any selected virtual machine hosted by the system hypervisor. The idle states in the system memory are dynamically monitored, and then the power consumption states in the system memory are dynamically reduced through the interaction of the hypervisor and the operating system running on the selected virtual machine. | 05-15-2014 |
20150186171 | PLACEMENT OF INPUT / OUTPUT ADAPTER CARDS IN A SERVER - Tracking data transfers in an input/output adapter card system to determine whether the adapter cards are well-placed with respect to the components (for example dynamic random access memories) with which the adapter cards respectively are observed to communicate data. Some embodiments use a heuristic value for each adapter card in the system based on inter node transfers and intra node transfers, which are separately weighted and summed over some predetermined time interval in order to obtain the heuristic value. | 07-02-2015 |
20150186323 | PLACEMENT OF INPUT / OUTPUT ADAPTER CARDS IN A SERVER - Tracking data transfers in an input/output adapter card system to determine whether the adapter cards are well-placed with respect to the components (for example dynamic random access memories) with which the adapter cards respectively are observed to communicate data. Some embodiments use a heuristic value for each adapter card in the system based on inter node transfers and intra node transfers, which are separately weighted and summed over some predetermined time interval in order to obtain the heuristic value. | 07-02-2015 |
20150309947 | TRACKING STATISTICS CORRESPONDING TO DATA ACCESS IN A COMPUTER SYSTEM - Embodiments of the present invention disclose a method, computer program product, and system for determining statistics corresponding to data transfer operations. In one embodiment, the computer implemented method includes the steps of receiving a request from an input/output (I/O) device to perform a data transfer operation between the I/O device and a memory, generating an entry in an input/output memory management unit (IOMMU) corresponding to the data transfer operation, wherein the entry in the IOMMU includes at least an indication of a processor chip that corresponds to the memory of the data transfer operation, monitoring the data transfer operation between the I/O device and the memory, determining statistics corresponding to the monitored data transfer operation, wherein the determined statistics include at least: the I/O device that performed the data transfer operation, the processor chip that corresponds to the memory of the data transfer operation, and an amount of data transferred. | 10-29-2015 |
20150309948 | TRACKING STATISTICS CORRESPONDING TO DATA ACCESS IN A COMPUTER SYSTEM - Embodiments of the present invention disclose a method, computer program product, and system for determining statistics corresponding to data transfer operations. In one embodiment, the computer implemented method includes the steps of receiving a request from an input/output (I/O) device to perform a data transfer operation between the I/O device and a memory, generating an entry in an input/output memory management unit (IOMMU) corresponding to the data transfer operation, wherein the entry in the IOMMU includes at least an indication of a processor chip that corresponds to the memory of the data transfer operation, monitoring the data transfer operation between the I/O device and the memory, determining statistics corresponding to the monitored data transfer operation, wherein the determined statistics include at least: the I/O device that performed the data transfer operation, the processor chip that corresponds to the memory of the data transfer operation, and an amount of data transferred. | 10-29-2015 |
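The bookkeeping in 20150309947/20150309948 can be sketched as a small tracker: on each DMA request, an IOMMU-style entry maps the transfer's target address to the processor chip owning that memory, and per-(device, chip) byte counts accumulate. The address-to-chip mapping function is an assumption supplied by the caller here:

```python
from collections import defaultdict

class IommuStatsTracker:
    """Accumulate, per (I/O device, processor chip) pair, the bytes
    moved by monitored DMA transfers, mirroring the abstract's
    statistics: which device, which chip's memory, how much data."""

    def __init__(self, addr_to_chip):
        self._addr_to_chip = addr_to_chip      # assumed address -> chip map
        self._bytes = defaultdict(int)         # (device, chip) -> bytes

    def record_transfer(self, device, phys_addr, length):
        chip = self._addr_to_chip(phys_addr)   # IOMMU-entry-style lookup
        self._bytes[(device, chip)] += length
        return chip

    def stats(self):
        return dict(self._bytes)
```

Such statistics are exactly what the adapter-placement heuristics in 20150186171/20150186323 above would consume to decide whether a card sits near the memory it actually talks to.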
20090036578 | Polyester Compositions, Method Of Manufacture, And Uses Thereof - A composition is described, comprising: (a) from 20 to 80 wt % of a polyester; (b) from 5 to 35 wt % of a flame retardant phosphinate of the formula (I) | 02-05-2009 |
20090088504 | HIGH HEAT POLYCARBONATES, METHODS OF MAKING, AND ARTICLES FORMED THEREFROM - Disclosed herein is a polymer blend comprising a first polycarbonate comprising a first structural unit derived from a 2-aryl-3,3-bis(4-hydroxyaryl)phthalimidine and a second structural unit derived from a dihydroxy aromatic compound, wherein the second structural unit is not identical to the first structural unit, and a second polycarbonate comprising a structural unit derived from a dihydroxy aromatic compound, wherein the polymer blend has a glass transition temperature of 155 to 200° C.; and wherein a test article having a thickness of 3.2 mm and molded from the blend has a haze of less than or equal to 3.0 measured in accordance with ASTM D1003-00. A method of making the polymer blend, and articles prepared from the blend, are also disclosed. | 04-02-2009 |
20110152471 | METHODS FOR THE PREPARATION OF A POLY(ARYLENE ETHER) POLYSILOXANE MULTIBLOCK COPOLYMER, MULTIBLOCK COPOLYMERS PRODUCED THEREBY, AND ASSOCIATED COMPOSITIONS AND ARTICLES - A poly(arylene ether)-polysiloxane multiblock copolymer is prepared by the reaction of a hydroxy-diterminated poly(arylene ether), a hydroxyaryl-diterminated polysiloxane, and an aromatic diacid chloride. This synthesis overcomes disadvantages of known syntheses of poly(arylene ether)-polysiloxane block copolymers. The poly(arylene ether)-polysiloxane multiblock copolymer is useful for improving the melt processibility of poly(arylene ether) compositions. | 06-23-2011 |
20110288214 | POLYSILOXANE-POLYCARBONATE COMPOSITIONS, METHOD OF MANUFACTURE, AND ARTICLES FORMED THEREFROM - A composition, comprising, based on the total weight of the polymer components in the composition, 1 to 40 wt. % of an aromatic polycarbonate, 30 to 98.8 wt. % of a polysiloxane-polycarbonate block copolymer, and 0.1 to 10 wt. % of a polysiloxane-polyimide block copolymer comprising more than 20 wt. % polysiloxane blocks, based on the total weight of the polysiloxane-polyimide copolymer. The compositions provide articles with low haze, high luminous transmittance, and good hydro-aging properties. The articles can further be formulated to have excellent flame retardance, particularly when KSS is used. | 11-24-2011 |
20110299479 | Method and access point for allocating whitespace spectrum - Disclosed is a method and apparatus for allocating a whitespace spectrum associated with a plurality of access points. The method includes reporting a signal strength associated with each of a plurality of other access points to a central controller. The method includes aggregating a plurality of demands to produce an aggregate demand. Each of the plurality of demands is associated with one of a plurality of users. The method includes reporting the aggregate demand to the central controller. The method includes associating one of the plurality of users with the access point based on a user setting, or associating one of the plurality of users based on an allocation by the central controller. The allocation from the central controller is based on the signal strength and the aggregate demand. The allocation indicates a frequency band from the set of frequency bands to be allocated by the access point to the user. | 12-08-2011 |
20110300891 | Method and controller for allocating whitespace spectrum - Disclosed is a method and apparatus for allocating a set of frequency bands to a plurality of access points. The method includes generating an interference map associated with the set of frequency bands and associated with the plurality of access points. The method includes aggregating a plurality of demands to produce an aggregate demand. Each of the plurality of demands is associated with one of the plurality of access points. The method includes dynamically allocating the set of frequency bands to each of the plurality of access points based on the interference map and the aggregate demand. | 12-08-2011 |
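The controller-side allocation in the abstract above combines an interference map with aggregate demand. A minimal sketch, assuming a greedy policy (the patent does not specify one): access points are served in decreasing order of demand, each receives a number of bands roughly proportional to its demand share, and a band is never reused on two access points the interference map marks as conflicting. All function and parameter names are illustrative.

```python
def allocate_bands(demands, interference, bands):
    """Greedy whitespace-band allocation sketch (illustrative assumption).

    demands:      {access_point: demand}
    interference: set of frozenset({ap1, ap2}) pairs that conflict
    bands:        list of available frequency bands
    """
    allocation = {ap: set() for ap in demands}
    total = sum(demands.values())
    # Serve access points in decreasing order of demand.
    for ap in sorted(demands, key=demands.get, reverse=True):
        # Target share of bands proportional to this AP's demand.
        want = max(1, round(len(bands) * demands[ap] / total))
        for band in bands:
            if len(allocation[ap]) >= want:
                break
            # A band may not be reused on two APs that interfere.
            conflict = any(
                band in allocation[other]
                and frozenset({ap, other}) in interference
                for other in allocation if other != ap
            )
            if not conflict:
                allocation[ap].add(band)
    return allocation
```

Note the two behaviors the abstract calls for: interfering access points end up on disjoint bands, while non-interfering access points are free to reuse the same band.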
20120147822 | SYSTEM AND METHOD FOR PROPORTIONAL RESOURCE ALLOCATION FOR MULTI-RATE RANDOM ACCESS - The present invention relates to a system and method for proportional-fair resource allocation for multi-rate random access. The method includes receiving, by a device, data packets to be transmitted to an access point on a shared uplink channel, and determining, by the device, whether or not to contend for access to the shared uplink channel based on a probability of access. The probability of access is based on a data transmission rate between the device and the access point. | 06-14-2012 |
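The abstract above ties a device's access probability to its data transmission rate but does not give the formula. One plausible instantiation, written here purely as an illustrative assumption: scale a base contention probability by the device's rate relative to the fastest rate, so faster devices (which hold the channel for less time per packet) contend more often and expected air-time per device is equalized, which is the usual goal of proportional-fair random access.

```python
def access_probability(rate, max_rate, base_prob=0.1):
    """Rate-dependent contention probability (illustrative formula):
    scale a base probability by the device's share of the peak rate."""
    return base_prob * rate / max_rate

def expected_airtime(rate, packet_bits, max_rate, base_prob=0.1):
    """Expected channel time claimed per slot: P(access) * tx duration.
    With the scaling above, this is independent of the device's rate."""
    return access_probability(rate, max_rate, base_prob) * packet_bits / rate
```

For example, a 6 Mbps device and a 54 Mbps device sending 12000-bit packets claim the same expected air-time per slot, because the slower device's nine-times-longer transmission is offset by a nine-times-lower access probability.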
20120173474 | METHOD AND SYSTEM FOR PREDICTING TRAVEL TIME - A method and system is provided for predicting, at a current time “t”, a time that may be taken to travel between a plurality of locations, at a future time-point “t+τ”. The method includes determining a deterministic component “μ | 07-05-2012 |
20140089250 | METHOD AND SYSTEM FOR PREDICTING TRAVEL TIME - A method and system is provided for predicting, at a current time “t”, a time that may be taken to travel between a plurality of locations, at a future time-point “t+τ”. The method includes determining a deterministic component “μ | 03-27-2014 |
20140112275 | QOS AWARE MULTI RADIO ACCESS POINT FOR OPERATION IN TV WHITESPACES - A QoS aware multi radio access point for operation in TV whitespaces is disclosed. The present invention relates to the operation of access points and, particularly, to the operation of access points in TV whitespaces. The AP is configured to intelligently choose the radios, determine available whitespaces in the spectrum, and allocate radios to the available whitespaces in the spectrum. The method determines the clients that need to be serviced by the AP and assigns each client associated with the AP to one of the radios. In addition, the method also accounts for the QoS requirements of different services, so every service is handled to satisfy its QoS requirements. The method ensures maximum utilization of the available whitespace spectrum by accounting for spectrum-specific characteristics. The method considers that bands for operation are spread across the spectrum and allocates the clients based on the availability of bands throughout the spectrum. | 04-24-2014 |
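The QoS-aware client-to-radio assignment described above can be sketched as a simple greedy heuristic. This is an assumption about one way such an assignment could work, not the patent's actual algorithm: each client carries a QoS demand (e.g. required throughput), each radio a capacity, and clients are served in decreasing order of demand on the least-loaded radio that can still satisfy them. All names and the capacity model are illustrative.

```python
def assign_clients(clients, radios):
    """Greedy QoS-aware client-to-radio assignment sketch (illustrative).

    clients: {client: qos_demand}   e.g. required throughput in Mbps
    radios:  {radio: capacity}      per-radio capacity in the same units
    Returns {client: radio or None} (None if the demand cannot be met).
    """
    remaining = dict(radios)             # radio -> spare capacity
    assignment = {}
    # Serve the most demanding clients first.
    for client, demand in sorted(clients.items(), key=lambda kv: -kv[1]):
        # Pick the radio with the most remaining capacity.
        radio = max(remaining, key=remaining.get)
        if remaining[radio] >= demand:
            assignment[client] = radio
            remaining[radio] -= demand
        else:
            assignment[client] = None    # QoS demand cannot be satisfied
    return assignment
```

Placing each client on the least-loaded radio spreads load across the radios, which keeps headroom for later, smaller QoS demands.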
20140153503 | SYSTEMS AND METHODS FOR BEACONING AND MANAGEMENT IN BANDWIDTH ADAPTIVE WIRELESS NETWORKS - Systems and methods for beaconing and management in bandwidth adaptive wireless networks are disclosed. The present invention relates to bandwidth allocation for access points and, more particularly, to bandwidth allocation for access points in wireless networks. The method employs beaconing and management mechanisms to associate clients with an Access Point (AP). The beaconing mechanism allows the client to efficiently discover the part of the spectrum over which an AP operates. Periodic beacon messages are sent by the AP to the client over the bandwidth of operation of the channel. The client then sends a client association request and gets associated with the AP. Further, critical information is conveyed to the AP in the beacon message. The AP further allocates the client to one of its radios of operation. The system is configured to handle disruptions by switching the AP to different parts of the spectrum during such disruptions. | 06-05-2014 |
20140342700 | SYSTEM AND METHOD FOR SEAMLESS SWITCHING BETWEEN OPERATOR NETWORKS - A system and method for seamless switching between operator networks is disclosed. The present invention relates to communication networks and, more particularly, to switching between operators in communication networks. A network element termed the service aggregator is provided that resides in the operator's network and acts as an intermediary between the mobile user and the operator. In addition, a switching module is provided on the mobile device of the user that interacts with the service aggregator to perform switching. The base station continuously broadcasts signaling information to the mobile device. Based on the signaling information received, the mobile device decides whether it wishes to switch to another operator's network. The service aggregator establishes a connection with the service gateway of the new network. Further, the service aggregator sends a handover signal, and the mobile device switches seamlessly to the new operator's network. | 11-20-2014 |