Patent application number | Description | Published |
20100332284 | Internet Package Shipping Systems and Methods - Systems and methods for shipping a package from a package sender to an intended recipient, utilizing Internet communications to place shipping orders, request on demand package pickup, maintain and utilize pre-stored profile information, view shipping history, track orders, etc. A package sender with an Internet-accessible computer accesses an Internet site and associated shipping system operated by a shipping service provider. The package sender enters information required for shipping the package, including shipping options and methods for payment, and the shipment transaction is validated. If the transaction is validated, printer indicia are communicated to the customer's computer, which is enabled to locally print a prepaid label containing special machine-readable as well as human-readable indicia. The shipping service provider acquires the package by drop-off, standard pickup or on call pickup, scans the machine readable indicia, verifies other indicia of authenticity, and processes the package in accordance with information encoded on the label. | 12-30-2010 |
20130124402 | INTERNET PACKAGE SHIPPING SYSTEMS AND METHODS - Systems and methods for shipping a package from a package sender to an intended recipient, utilizing Internet communications to place shipping orders, request on demand package pickup, maintain and utilize pre-stored profile information, view shipping history, track orders, etc. A package sender with an Internet-accessible computer accesses an Internet site and associated shipping system operated by a shipping service provider. The package sender enters information required for shipping the package, including shipping options and methods for payment, and the shipment transaction is validated. If the transaction is validated, printer indicia are communicated to the customer's computer, which is enabled to locally print a prepaid label containing special machine-readable as well as human-readable indicia. The shipping service provider acquires the package by drop-off, standard pickup or on call pickup, scans the machine readable indicia, verifies other indicia of authenticity, and processes the package in accordance with information encoded on the label. | 05-16-2013 |
20110314211 | RECOVER STORE DATA MERGING - Various embodiments of the present invention merge data in a cache memory. In one embodiment, a set of store data is received from a processing core. A store merge command and a merge mask are also received from the processing core. A portion of the store data on which to perform a merging operation is identified based on the store merge command. A sub-portion of the portion of the store data to be merged with a corresponding set of data from a cache memory is identified based on the merge mask. The sub-portion is merged with the corresponding set of data from the cache memory. | 12-22-2011 |
20110314212 | MANAGING IN-LINE STORE THROUGHPUT REDUCTION - Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred. | 12-22-2011 |
20110320695 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 12-29-2011 |
20110320727 | DYNAMIC CACHE QUEUE ALLOCATION BASED ON DESTINATION AVAILABILITY - An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type and to assign a first request having the first command type to the first command queue and a second command having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available. | 12-29-2011 |
20110320779 | PERFORMANCE MONITORING IN A SHARED PIPELINE - A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers. | 12-29-2011 |
20120210188 | HANDLING CORRUPTED BACKGROUND DATA IN AN OUT OF ORDER EXECUTION ENVIRONMENT - Handling corrupted background data in an out of order processing environment. Modified data is stored on a byte of a word having at least one byte of background data. A byte valid vector and a byte store bit are added to the word. Parity checking is done on the word. If the word does not contain corrupted background data, the word is propagated to the next level of cache. If the word contains corrupted background data, a copy of the word is fetched from a next level of cache that is ECC protected, the byte having the modified data is extracted from the word and swapped for the corresponding byte in the word copy. The word copy is then written into the next level of cache that is ECC protected. | 08-16-2012 |
20130060997 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 03-07-2013 |
20130067169 | DYNAMIC CACHE QUEUE ALLOCATION BASED ON DESTINATION AVAILABILITY - An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type and to assign a first request having the first command type to the first command queue and a second command having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available. | 03-14-2013 |
20130080705 | MANAGING IN-LINE STORE THROUGHPUT REDUCTION - Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred. | 03-28-2013 |
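The byte-granular store merging described in application 20110314211 above can be illustrated with a small sketch. This is not the patented implementation; the word width and the 1-bit-per-byte mask convention are assumptions for illustration only:

```python
def merge_store(cache_word: bytes, store_data: bytes, merge_mask: int) -> bytes:
    """Merge store_data into cache_word byte-by-byte.

    Bit i of merge_mask set -> take byte i from store_data,
    otherwise keep the existing cache byte. The 1-bit-per-byte
    mask convention is an assumption for illustration.
    """
    assert len(cache_word) == len(store_data)
    return bytes(
        store_data[i] if (merge_mask >> i) & 1 else cache_word[i]
        for i in range(len(cache_word))
    )

# Example: replace bytes 0 and 2 of a 4-byte cached word.
merged = merge_store(b"\x00\x11\x22\x33", b"\xaa\xbb\xcc\xdd", 0b0101)
# merged == b"\xaa\x11\xcc\x33"
```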
20110314183 | SYSTEM AND METHOD FOR MANAGING DATAFLOW IN A TEMPORARY MEMORY - A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into an intermediate temporary memory and an output; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input. | 12-22-2011 |
20110320659 | DYNAMIC MULTI-LEVEL CACHE INCLUDING RESOURCE ACCESS FAIRNESS SCHEME - An apparatus for controlling access to a resource includes a shared pipeline configured to communicate with the resource, a plurality of command queues configured to form instructions for the shared pipeline and an arbiter coupled between the shared pipeline and the plurality of command queues configured to grant access to the shared pipeline to a one of the plurality of command queues based on a first priority scheme in a first operating mode. The apparatus also includes interface logic coupled to the arbiter and configured to determine that contention for access to the resource exists among the plurality of command queues and to cause the arbiter to grant access to the shared pipeline based on a second priority scheme in second operating mode. | 12-29-2011 |
20110320731 | ON DEMAND ALLOCATION OF CACHE BUFFER SLOTS - Dynamic allocation of cache buffer slots includes receiving a request to perform an operation that requires a storage buffer slot, the storage buffer slot residing in a level of storage. The dynamic allocation of cache buffer slots also includes determining availability of the storage buffer slot for the cache index as specified by the request. Upon determining the storage buffer slot is not available, the dynamic allocation of cache buffer slots includes evicting data stored in the storage buffer slot, and reserving the storage buffer slot for data associated with the request. | 12-29-2011 |
20110320855 | ERROR DETECTION AND RECOVERY IN A SHARED PIPELINE - A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic. | 12-29-2011 |
20110320863 | DYNAMIC RE-ALLOCATION OF CACHE BUFFER SLOTS - Dynamic re-allocation of cache buffer slots includes moving data out of a reserved buffer slot upon detecting an error in the reserved buffer slot, creating a new buffer slot, and storing the data moved out of the reserved buffer slot in the new buffer slot. | 12-29-2011 |
20110321053 | MULTIPLE LEVEL LINKED LRU PRIORITY - A method that includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to the LRU selection logic at the first level when it is determined that the request is active, determining whether the request is an LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is an LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource. | 12-29-2011 |
20140095839 | MONITORING PROCESSING TIME IN A SHARED PIPELINE - A pipelined processing device includes: a pipeline controller configured to receive at least one instruction associated with an operation from each of a plurality of subcontrollers, and input the at least one instruction into a pipeline; and a pipeline counter configured to receive an active time value from each of the plurality of subcontrollers, the active time value indicating at least a portion of a time taken to process the at least one instruction, the pipeline controller configured to route the active time value to a shared pipeline storage for performance analysis. | 04-03-2014 |
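The two-level LRU arbitration in application 20110321053 above — first pick the least-recently-used active request within each group, then compare the group winners — can be sketched roughly as follows. The timestamp-based ordering and all names here are assumptions for illustration, not taken from the application:

```python
from typing import Optional

class Request:
    def __init__(self, name: str, last_used: int, active: bool = True):
        self.name = name
        self.last_used = last_used   # smaller = less recently used
        self.active = active

def group_lru(group: list) -> Optional[Request]:
    """First level: LRU among the active requests in one group."""
    active = [r for r in group if r.active]
    return min(active, key=lambda r: r.last_used) if active else None

def select_lru(groups: list) -> Optional[Request]:
    """Second level: compare each group's LRU winner and grant the
    overall LRU request access to the shared resource."""
    winners = [w for g in groups if (w := group_lru(g)) is not None]
    return min(winners, key=lambda r: r.last_used) if winners else None

groups = [
    [Request("a", 5), Request("b", 2)],
    [Request("c", 1, active=False), Request("d", 7)],
]
# Group winners are "b" (2) and "d" (7); the overall grant goes to "b".
```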
20130339593 | REDUCING PENALTIES FOR CACHE ACCESSING OPERATIONS - A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays. | 12-19-2013 |
20130339606 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided and relates to associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the store operations into the platform registers and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the platform registers using the control information from the platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339607 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The computer product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the first and second store operation into the first and second platform registers and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the first and second platform registers using the control information from the first and second platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339701 | CROSS-PIPE SERIALIZATION FOR MULTI-PIPELINE PROCESSOR - Embodiments relate to cross-pipe serialization for a multi-pipeline computer processor. An aspect includes receiving, by a processor, the processor comprising a first pipeline, the first pipeline comprising a serialization pipeline, and a second pipeline, the second pipeline comprising a non-serialization pipeline, a request comprising a first subrequest for the first pipeline and a second subrequest for the second pipeline. Another aspect includes completing the first subrequest by the first pipeline. Another aspect includes, based on completing the first subrequest by the first pipeline, sending a cross-pipe unlock signal from the first pipeline to the second pipeline. Yet another aspect includes, based on receiving the cross-pipe unlock signal by the second pipeline, completing the second subrequest by the second pipeline. | 12-19-2013 |
20140095795 | REDUCING PENALTIES FOR CACHE ACCESSING OPERATIONS - A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays. | 04-03-2014 |
20140095836 | CROSS-PIPE SERIALIZATION FOR MULTI-PIPELINE PROCESSOR - Embodiments relate to cross-pipe serialization for a multi-pipeline computer processor. An aspect includes receiving, by a processor, the processor comprising a first pipeline, the first pipeline comprising a serialization pipeline, and a second pipeline, the second pipeline comprising a non-serialization pipeline, a request comprising a first subrequest for the first pipeline and a second subrequest for the second pipeline. Another aspect includes completing the first subrequest by the first pipeline. Another aspect includes, based on completing the first subrequest by the first pipeline, sending a cross-pipe unlock signal from the first pipeline to the second pipeline. Yet another aspect includes, based on receiving the cross-pipe unlock signal by the second pipeline, completing the second subrequest by the second pipeline. | 04-03-2014 |
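The same-wordline store combining described in applications 20130339606 and 20130339607 above — hold the first store's commit until the second platform register has loaded, then write both to the shared wordline at once — can be sketched as follows. The register tuple and cache layout are a hypothetical simplification, not the patented design:

```python
def commit_same_wordline(reg1, reg2, cache):
    """Commit two buffered stores that target the same wordline in a
    single combined write, reducing the wordline's busy time.

    Each reg is a hypothetical (wordline, offset, data) tuple; cache
    maps a wordline index to a bytearray.
    """
    wl1, off1, data1 = reg1
    wl2, off2, data2 = reg2
    assert wl1 == wl2, "combining only applies to stores sharing a wordline"
    line = cache[wl1]
    # One combined update instead of two separate wordline writes.
    line[off1:off1 + len(data1)] = data1
    line[off2:off2 + len(data2)] = data2

cache = {0: bytearray(8)}
commit_same_wordline((0, 0, b"\x01\x02"), (0, 4, b"\x03\x04"), cache)
# cache[0] == bytearray(b"\x01\x02\x00\x00\x03\x04\x00\x00")
```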
20100172642 | STABILIZED EQUIPMENT SUPPORT AND METHOD OF BALANCING SAME - A stabilized support for supporting motion-sensitive, ultra-lightweight, camera equipment includes a hollow platform on which the camera equipment is mounted, and a structure on which the platform is detachably mounted. The structure has a handle, a counterweight mounted below the platform, and an arm for connecting the handle with the counterweight. The platform has a plurality of interior compartments preferably arranged in generally parallel rows at opposite sides of the platform, each row extending past a center of gravity. A plurality of ballast weights is held and confined in the interior compartments within the platform to balance the support when held by the handle, or supported by optional support legs. The placement of the ballast weights is based on a balancing procedure in which the camera equipment is balanced relative to a stationary horizontal support surface. | 07-08-2010 |
20110019992 | STABILIZED MOUNT FOR, AND METHOD OF, STEADILY SUPPORTING A MOTION-SENSITIVE, IMAGE CAPTURE DEVICE - A stabilized mount stably supports a motion-sensitive, image capture device, such as a cellular telephone or a personal digital assistant, on a support surface, such as a tripod or analogous camera equipment. The device is operative for capturing an image over a field of view along an optical axis perpendicular to an image plane. The mount includes a holder for holding the device during image capture, and a fixed base integral with the holder and lying in a base plane perpendicular to the image plane when the base is supported by the support surface in a supported orientation. The base is operative for steadily positioning the holder and the device on the support surface in the supported orientation during the image capture. | 01-27-2011 |
20110069947 | WEIGHTED MOUNTING ARRANGEMENT FOR, AND METHOD OF, STEADILY SUPPORTING A MOTION-SENSITIVE, IMAGE CAPTURE DEVICE - A weighted mounting arrangement stably supports a motion-sensitive, image capture device incorporated in a cellular telephone. The arrangement includes a handheld equipoising structure having a platform and a handle connected to the platform at a handle connection, a mount on the platform for holding the device during image capture, and a ballast weight mounted on the mount with the held device as an assembly. The assembly and the equipoising structure together have a combined center of gravity positioned in close adjacent proximity below the handle connection for balancing the arrangement during image capture. | 03-24-2011 |
20110164173 | BALANCED MOUNTING ARRANGEMENT FOR, AND METHOD OF, STEADILY SUPPORTING A MOTION-SENSITIVE, IMAGE CAPTURE DEVICE - A balanced mounting arrangement stably supports a motion-sensitive, image capture device, and includes a mount for holding the device during image capture, and a handheld equipoising structure having a support platform on which the mount and the held device are mounted during image capture, a bottom counterweight below the platform, and a curved arm extending along an arcuate path between the platform and the counterweight. A weight component is mounted on, and is movable relative to and along, the curved arm to adjust a vertical balance position of the arrangement. | 07-07-2011 |
20110170851 | STABILIZED EQUIPMENT SUPPORT AND METHOD OF BALANCING SAME - A stabilized support for supporting motion-sensitive, ultra-lightweight, camera equipment includes a hollow platform on which the camera equipment is mounted, and a structure on which the platform is detachably mounted. The structure has a handle, a counterweight mounted below the platform, and an arm for connecting the handle with the counterweight. The platform has a plurality of interior compartments preferably arranged in generally parallel rows at opposite sides of the platform, each row extending past a center of gravity. A plurality of ballast weights is held and confined in the interior compartments within the platform to balance the support when held by the handle, or supported by optional support legs. The placement of the ballast weights is based on a balancing procedure in which the camera equipment is balanced relative to a stationary horizontal support surface. | 07-14-2011 |
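The combined center of gravity referred to in application 20110069947 above is a mass-weighted average of the component centers of gravity. A minimal sketch, with hypothetical masses and coordinates chosen only for illustration:

```python
def combined_cg(components):
    """Mass-weighted average of component centers of gravity.

    components: list of (mass, (x, y)) pairs; units are arbitrary.
    Per the abstract above, a balanced handheld mount wants this
    point positioned just below the handle connection.
    """
    total = sum(m for m, _ in components)
    x = sum(m * px for m, (px, _py) in components) / total
    y = sum(m * py for m, (_px, py) in components) / total
    return (x, y)

# Hypothetical masses/positions: device+mount assembly vs. ballast weight.
cg = combined_cg([(0.25, (0.0, 2.0)), (0.75, (0.0, -1.0))])
# cg == (0.0, -0.25): the combined CG sits below the origin (handle).
```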
20100266272 | Folding Camera Support with Rotational Inertia Adjustment - A folding, adjustable camera support having a central post secured in one or more central post holders. A first camera equipment support component having a distal end and a proximate end is adjustably attached at its proximate end to one of the one or more central post holders, and is configured to adjust between an operative position and a folded position via an adjustment mechanism. The distance of the camera equipment is adjustable radially from the central post. A second camera equipment support component is similarly configured and adjustable with respect to the central post, and can balance the first camera equipment component. | 10-21-2010 |
20120002062 | MODULAR AND INTEGRATED EQUIPMENT STABILIZING SUPPORT APPARATUSES - An image-capture device stabilizer comprising a plurality of frictionally engaged components. The components are, or are configurable into, parts such as an image-capture device platform, a gimbal apparatus, and one or more balancing arms. The image-capture device stabilizer is pre-balanced for a specific image-capture device, and can be constructed to be suitable for use with an image-capture device weighing less than 1 lb. The image-capture device stabilizer can be provided in a kit. The invention also includes a method of fabricating an image-capture device stabilizer. | 01-05-2012 |
20140147104 | CAMERA STABILIZER - A camera stabilizer having a camera mount for attaching and positioning a camera, a gimbal component disposed below the camera mount and positioned at or near the center of gravity and a balancing arm. A handle is offset from a line through the center of gravity of the stabilizer plus camera. Adjustable weights are provided in a balancing arm to vary the center of gravity location. | 05-29-2014 |
20110097581 | IN-FIBER FILAMENT PRODUCTION - In a fiber there is provided a fiber matrix material having a fiber length; and an array of isolated in-fiber filaments that extend the fiber length. The in-fiber filaments are disposed at a radius in a cross section of the fiber that is a location of a continuous filament material layer in a drawing preform of the fiber. As a result, there is provided a fiber matrix material having a fiber length; and a plurality of isolated fiber elements that are disposed in the fiber matrix, extending the fiber length, where the plurality is of a number greater than a number of isolated domains in a drawing preform of the fiber. | 04-28-2011 |
20120267820 | FIBER DRAW SYNTHESIS - Fiber draw synthesis process. The process includes arranging reactants in the solid state in proximate domains within a fiber preform. The preform is fluidized at a temperature below the melting temperature of the reactants. The fluidized preform is drawn into a fiber, thereby bringing the reactants in the proximate domains into intimate contact with one another, resulting in a chemical reaction between the reactants that synthesizes a compound within the fiber. The reactants may be dissolved or mixed in a host material within the preform. In a preferred embodiment, the reactants are selenium and zinc. | 10-25-2012 |
20140099849 | PIGMENT PASTE COMPOSITION - A pigment paste composition includes a) a flame retardant including a combination of antimony oxide, zinc borate, and zinc sulfide; b) a coloring agent; c) a solvent comprising a plasticizer; and d) a wetting and dispersing agent. Further included is a fabric including at least one fiber coated with the aforementioned pigment paste composition dispersed within a polymer base. | 04-10-2014 |
20090009363 | METHODS, DATA STRUCTURES, AND SYSTEMS TO CONFIGURE AVIONIC EQUIPMENT WITH PROFILE DATA - Methods, data structures, and systems are provided for configuring avionic equipment with profile data. Profile data is defined, stored, and/or retrieved. The profile data is used to configure one or more display fields of the avionic equipment, display units of measure, identify flight plans, define map settings, set navigation fields, define communication transceiver spacing, define data/time setup, to configure one or more timers, alarms, and/or to configure other communication, navigation, or surveillance settings associated with avionic equipment. Furthermore, instances of the profile data are associated with unique identifiers for storage and retrieval purposes. | 01-08-2009 |
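The storage and retrieval of profile-data instances by unique identifier described in application 20090009363 above can be sketched as a simple keyed store. The field names in the example profile are hypothetical, not taken from the application:

```python
import uuid

class ProfileStore:
    """Store and retrieve configuration-profile instances by a unique
    identifier, loosely following the abstract above. The profile
    field names are assumptions for illustration."""

    def __init__(self):
        self._profiles = {}

    def save(self, profile: dict) -> str:
        """Associate the profile instance with a unique identifier."""
        pid = str(uuid.uuid4())
        self._profiles[pid] = profile
        return pid

    def load(self, pid: str) -> dict:
        """Retrieve a stored profile instance by its identifier."""
        return self._profiles[pid]

store = ProfileStore()
pid = store.save({"units": "metric", "com_spacing_khz": 8.33, "timers": ["fuel"]})
assert store.load(pid)["units"] == "metric"
```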