27th week of 2014 patent application highlights part 73 |
Patent application number | Title | Published |
20140189246 | MEASURING APPLICATIONS LOADED IN SECURE ENCLAVES AT RUNTIME - Embodiments of an invention for measuring applications loaded in secure enclaves at runtime are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive an instruction to extend a first measurement of a secure enclave with a second measurement. The execution unit is to execute the instruction after initialization of the secure enclave. | 2014-07-03 |
20140189247 | APPARATUS AND METHOD FOR IMPLEMENTING A SCRATCHPAD MEMORY - An apparatus and method for implementing a scratchpad memory within a cache using priority hints. For example, a method according to one embodiment comprises: providing a priority hint for a scratchpad memory implemented using a portion of a cache; determining a page replacement priority based on the priority hint; storing the page replacement priority in a page table entry (PTE) associated with the page; and using the page replacement priority to determine whether to evict one or more cache lines associated with the scratchpad memory from the cache. | 2014-07-03 |
20140189248 | EFFICIENT ONLINE CONSTRUCTION OF MISS RATE CURVES - Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced. | 2014-07-03 |
20140189249 | Software and Hardware Coordinated Prefetch - Included is an apparatus comprising a processor configured to identify a code segment in a program, analyze the code segment to determine a memory access pattern, if the memory access pattern is regular, turn on hardware prefetching for the code segment by setting a control register before the code segment, and turn off the hardware prefetching by resetting the control register after the code segment. Also included is a method comprising identifying a code segment in a program, analyzing the code segment to determine a memory access pattern, if the memory access pattern is regular, turning on hardware prefetching for the code segment by setting a control register before the code segment, and turning off the hardware prefetching by resetting the control register after the code segment. | 2014-07-03 |
20140189250 | Store Forwarding for Data Caches - A bit or other vector may be used to identify whether an address range entered into an intermediate buffer corresponds to most recently updated data associated with the address range. A bit or other vector may also be used to identify whether an address range entered into an intermediate buffer overlaps with an address range of data that is to be loaded. A processing device may then determine whether to obtain data that is to be loaded entirely from a cache, entirely from an intermediate buffer which temporarily buffers data destined for a cache until the cache is ready to accept the data, or from both the cache and the intermediate buffer depending on the particular vector settings. Systems, devices, methods, and computer readable media are provided. | 2014-07-03 |
20140189251 | UPDATE MASK FOR HANDLING INTERACTION BETWEEN FILLS AND UPDATES - A multi-core processor implements a cache coherency protocol in which probe messages are address-ordered on a probe channel while responses are un-ordered on a response channel. When a first core generates a read of an address that misses in the first core's cache, a line fill is initiated. If a second core is writing the same address, the second core generates an update on the address-ordered probe channel. The second core's update may arrive before or after the first core's line fill returns. If the update arrived before the fill returned, a mask is maintained to indicate which portions of the line were modified by the update so that the late-arriving line fill only modifies portions of the line that were unaffected by the earlier-arriving update. | 2014-07-03 |
20140189252 | DYNAMIC CACHE WRITE POLICY - A system, processor and method to monitor specific cache events and behavior based on established principles of quantized architectural vulnerability factor (AVF) through the use of a dynamic cache write policy controller. The output of the controller is then used to set the write-back or write-through mode policy for any given cache. This method can be used to change cache modes dynamically and does not require the system to be rebooted. The dynamic nature of the controller provides the capability of intelligently switching from reliability to performance mode and back as needed. This method eliminates the residency time of dirty lines in a cache, which increases soft error (SER) resiliency of protected caches in the system and reduces detectable unrecoverable errors (DUE), while keeping implementation cost of hardware at a minimum. | 2014-07-03 |
20140189253 | CACHE COHERENCY AND PROCESSOR CONSISTENCY - Responsive to execution of a computer instruction in a current translation window, state indicators associated with a cache line accessed for the execution may be modified. The state indicators may include: a first indicator to indicate whether the computer instruction is a load instruction moved from a subsequent translation window into the current translation window, a second indicator to indicate whether the cache line is modified in a cache responsive to the execution of the computer instruction, a third indicator to indicate whether the cache line is speculatively modified in the cache responsive to the execution of the computer instruction, a fourth indicator to indicate whether the cache line is speculatively loaded by the computer instruction, a fifth indicator to indicate whether a core executing the computer instruction exclusively owns the cache line, and a sixth indicator to indicate whether the cache line is invalid. | 2014-07-03 |
20140189254 | Snoop Filter Having Centralized Translation Circuitry and Shadow Tag Array - A processor is described that includes a plurality of processing cores. The processor includes an interconnection network coupled to each of said processing cores. The processor includes snoop filter logic circuitry coupled to the interconnection network and associated with coherence plane logic circuitry of the processor. The snoop filter logic circuitry contains circuitry to hold information that identifies not only which of the processing cores are caching specific cache lines, but also where in the respective caches of the processing cores those cache lines are cached. | 2014-07-03 |
20140189255 | METHOD AND APPARATUS TO SHARE MODIFIED DATA WITHOUT WRITE-BACK IN A SHARED-MEMORY MANY-CORE SYSTEM - A cache-coherent device may include multiple caches and a cache coherency engine, which monitors whether there are more than one versions of a cache line stored in the caches and whether the version of the cache line in the caches is consistent with the version of the cache line stored in the memory. | 2014-07-03 |
20140189256 | PROCESSOR WITH MEMORY RACE RECORDER TO RECORD THREAD INTERLEAVINGS IN MULTI-THREADED SOFTWARE - A processor includes a first core to execute a first software thread, a second core to execute a second software thread, and shared memory access monitoring and recording logic. The logic includes memory access monitor logic to monitor accesses to memory by the first thread, record memory addresses of the monitored accesses, and detect data races involving the recorded memory addresses with other threads. The logic also includes chunk generation logic to generate chunks to represent committed execution of the first thread. Each of the chunks is to include a number of instructions of the first thread executed and committed and a time stamp. The chunk generation logic is to stop generation of a current chunk in response to detection of a data race by the memory access monitor logic. A chunk buffer is to temporarily store chunks until the chunks are transferred out of the processor. | 2014-07-03 |
20140189257 | SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes stacked memory strings in which at least some adjacent memory strings share a common source line. During a read operation for a selected memory string, a first current path is formed from a bit line of the selected memory string to the common source line through the selected memory string. A second current path is formed from the bit line of the selected memory string, through the common source line, to a bit line of an adjacent unselected memory string. This reduced path resistance enhances device reliability in read mode. | 2014-07-03 |
20140189258 | SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a memory array including memory blocks stacked in a plurality of layers over a substrate, first lines coupling word lines of memory blocks arranged in even-numbered layers, and second lines coupling word lines of memory blocks arranged in odd-numbered layers. | 2014-07-03 |
20140189259 | SEMICONDUCTOR DEVICE AND ELECTRONIC DEVICE - A semiconductor device includes a first memory controller configured to output a first control signal to first and second external memories through a first memory interface, a second memory controller configured to output a second control signal to the second external memory through a second memory interface, an inter-device interface for communicating with another semiconductor device, terminals configured to output the second control signal that has passed through the second memory interface, and a first selector configured to select between the second memory interface and the inter-device interface in accordance with an operation mode of the semiconductor device and to couple the selected interface to the terminals. | 2014-07-03 |
20140189260 | APPROACH FOR CONTEXT SWITCHING OF LOCK-BIT PROTECTED MEMORY - A streaming multiprocessor in a parallel processing subsystem processes atomic operations for multiple threads in a multi-threaded architecture. The streaming multiprocessor receives a request from a thread in a thread group to acquire access to a memory location in a lock-protected shared memory, and determines whether an address lock in a plurality of address locks is asserted, where the address lock is associated with the memory location. If the address lock is asserted, then the streaming multiprocessor refuses the request. Otherwise, the streaming multiprocessor asserts the address lock, asserts a thread group lock in a plurality of thread group locks, where the thread group lock is associated with the thread group, and grants the request. One advantage of the disclosed techniques is that acquired locks are released when a thread is preempted. As a result, a preempted thread that has previously acquired a lock does not retain the lock indefinitely. | 2014-07-03 |
20140189261 | ACCESS TYPE PROTECTION OF MEMORY RESERVED FOR USE BY PROCESSOR LOGIC - A processor of an aspect includes operation mode check logic to determine whether to allow an attempted access to an operation mode and access type protected memory based on an operation mode that is to indicate whether the attempted access is by an on-die processor logic. Access type check logic is to determine whether to allow the attempted access to the operation mode and access type protected memory based on an access type of the attempted access to the operation mode and access type protected memory. Protection logic is coupled with the operation mode check logic and is coupled with the access type check logic. The protection logic is to deny the attempted access to the operation mode and access type protected memory if at least one of the operation mode check logic and the access type check logic determines not to allow the attempted access. | 2014-07-03 |
20140189262 | OPTIMIZATION OF NATIVE BUFFER ACCESSES IN JAVA APPLICATIONS ON HYBRID SYSTEMS - Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the first buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining. | 2014-07-03 |
20140189263 | Storage Device and Method for Reallocating Storage Device Resources Based on an Estimated Fill Level of a Host Buffer - A storage device and method for reallocating storage device resources based on an estimated fill level of a host buffer are disclosed. In one embodiment, a storage device receives, from a host device, a rate at which the host device stores data in its buffer and tracks an amount of data that was received from the host device. The storage device estimates a fill level of the buffer at an elapsed time using the rate, the elapsed time, and the amount of data received from the host device over that elapsed time. If the estimated fill level of the buffer is above a threshold, the storage device increases a rate of receiving data from the host device. | 2014-07-03 |
20140189264 | Reads and Writes Between a Contiguous Data Block and Noncontiguous Sets of Logical Address Blocks in a Persistent Storage Device - In the present disclosure, a persistent storage device includes both persistent storage, which includes a set of persistent storage blocks, and a storage controller. The persistent storage device stores and retrieves data in response to commands received from an external host device. The persistent storage device stores data, from a contiguous data block, to two or more sets of logical address blocks in persistent storage. The persistent storage device also retrieves data, corresponding to a contiguous data block, from two or more sets of logical address blocks in persistent storage. In both instances, the two or more sets of logical address blocks in persistent storage, in aggregate, are not contiguous. | 2014-07-03 |
20140189265 | ATOMIC TIME COUNTER SYNCHRONIZATION - Methods, integrated circuit devices, and fabrication processes relating to synchronization of master and local timestamp counters (TSCs) are described. One method includes sending, to a memory bus, in response to an event that desynchronizes a master and a local TSC, a bus-lock command to perform atomic reading from a first memory location and atomic writing to a second memory location; reading a master timestamp from the master TSC via the first memory location; writing a local timestamp to the local TSC via the second memory location, to synchronize the local TSC with the master TSC; and sending, to the memory bus, a bus-unlock command; wherein the master TSC is memory mapped to the first memory location and the local TSC is memory mapped to the second memory location. | 2014-07-03 |
20140189266 | EFFICIENT READ AND WRITE OPERATIONS - Computer readable media, methods and apparatuses are disclosed that may be configured for sequentially reading data of a file stored on a storage medium. The disclosure also provides for alternating transferring of fixed size portions of the file data to a first buffer and a second buffer, alternating processing of data blocks of the fixed sized portions in parallel from the first and second buffers by a plurality of processing threads, and outputting the processed data blocks. | 2014-07-03 |
20140189267 | METHOD AND APPARATUS FOR MANAGING MEMORY SPACE - Embodiments of the present invention relate to a method, apparatus and computer product for managing memory space. In one aspect of the present invention, there is provided a method for managing memory space that is organized into pages, the pages being divided into a plurality of page sets, each page set being associated with one of a plurality of upper-layer systems, by: performing state monitoring of the plurality of upper-layer systems to assign priorities to the plurality of upper-layer systems; and determining an order of releasing the pages of the memory space based on the priorities of the plurality of upper-layer systems with the page sets as units. Other aspects and embodiments of the invention are also disclosed. | 2014-07-03 |
20140189268 | HIGH READ BLOCK CLUSTERING AT DEDUPLICATION LAYER - Methods, systems, and computer program products are provided for deduplicating data mapping a plurality of file blocks of selected data to a plurality of logical blocks, deduplicating the plurality of logical blocks to thereby associate each logical block with a corresponding physical block of a plurality of physical blocks located on a physical memory device, two or more of the corresponding physical blocks being non-contiguous with each other, determining whether one or more of the corresponding physical blocks are one or more frequently accessed physical blocks being accessed at a frequency above a threshold frequency and being referred to by a common set of applications, and relocating data stored at the one or more frequently accessed physical blocks to different ones of the plurality of physical blocks, the different ones of the plurality of physical blocks being physically contiguous. | 2014-07-03 |
20140189269 | System and Method for Virtual Tape Library Over S3 - System and method embodiments are provided herein to enable VTL backup and retrieval over S3 storage technology. An embodiment method includes mapping a plurality of data blocks for VTL storage into a plurality of S3 objects for S3 storage, and storing the S3 objects at one or more locations for S3 storage over one or more networks, wherein the mapping enables stateless backup and restore of the data blocks. An embodiment network component includes a Small Computer System Interface configured to receive a plurality of data blocks from one or more servers, a data library storage including tape storage, disk storage, or both that is configured to store the data blocks, a blocks-to-objects mapping engine configured to map the data blocks into a plurality of S3 objects, and an S3 interface configured to transfer the S3 objects to one or more locations for S3 storage over one or more networks. | 2014-07-03 |
20140189270 | STORAGE SYSTEM - A storage system of the present invention includes: a data writing means for storing actual data configuring storage data into a storage device and, for every update of the content of the storage data, newly storing; and a data specifying means for specifying the latest storage data among the same storage data stored in the storage device. The data writing means is configured to store actual data configuring storage data in association with update information whose value increases by 1 for every update. The data specifying means is configured to check whether update information whose value is 2 | 2014-07-03 |
20140189271 | SYSTEM AND ELECTRONIC DEVICE FOR UTILIZING MEMORY OF VIDEO CARD - A control system for utilizing a memory of a video card of an electronic device is executed by a control unit of the electronic device. The control system includes a storage space dividing module and a storage control module. The storage space dividing module divides the memory of the video card into a first storage space and a second storage space according to a division proportion; the first storage space is defined to store graphics data temporarily and the second storage space is defined to store particular data. The storage control module determines a size of the second storage space, obtains particular data with a size less than the size of the second storage space from the storage unit, and stores the particular data into the second storage space. | 2014-07-03 |
20140189272 | METHOD AND APPARATUS FOR MANAGING MEMORY - A method includes, if a functional unit assigned with one of multiple reserved areas is not driven, storing data with one of a data withdrawal condition set in the multiple reserved areas, and if the functional unit is driven, processing the data stored in the one of the multiple reserved areas to restore the multiple reserved areas for driving the functional unit based on the one of the data withdrawal condition set. An apparatus comprises a memory including multiple reserved areas and multiple non-reserved areas, wherein if a functional unit assigned with one of the multiple reserved areas is not driven, data is stored in the one of the multiple reserved areas with one of a data withdrawal condition set, and when the functional unit is driven, the data stored in the one of the multiple reserved areas is processed to restore the one of the multiple reserved areas. | 2014-07-03 |
20140189273 | METHOD AND SYSTEM FOR FULL RESOLUTION REAL-TIME DATA LOGGING - A method and data-logging system are provided. The system includes a map-ahead thread configured to acquire blocks of private memory for storing data to be logged, the blocks of private memory being twice as large as the file page size, a master thread configured to write data to the blocks of private memory, in real-time and in full resolution, the data acquired during operation of a machine generating the data and written to the blocks of private memory in real-time, the machine including a controller including a processor communicatively coupled to a memory having processor instructions therein, and a write-behind thread configured to acquire pages of memory that are mapped to pages in a file, copy the data from the blocks of private memory to the acquired file-mapped blocks of memory. | 2014-07-03 |
20140189274 | APPARATUS AND METHOD FOR PAGE WALK EXTENSION FOR ENHANCED SECURITY CHECKS - An apparatus and method for managing a protection table by a processor. For example, a processor according to one embodiment of the invention comprises: protection table management logic to manage a protection table, the protection table having an entry for each protected page or each group of protected pages in memory; wherein the protection table management logic prevents direct access to the protection table by user application program code and operating system program code but permits direct access by the processor. | 2014-07-03 |
20140189275 | PROVIDING VERSIONING IN A STORAGE DEVICE - Provided are a computer program product, system and method for managing Input/Output (I/O) requests to a storage device. A write request is received having write data for a logical address, wherein data for the logical address is at a first physical location in the storage device and has an indicated version number. The write data is written to a second physical location in the storage device, and a determination is made as to whether a preserve mode is enabled. In response to determining that the preserve mode is enabled, the second physical location is indicated as having a current version number of the logical address and the first physical location is indicated as having a previous version number of the logical address. | 2014-07-03 |
20140189276 | METADATA CONTAINERS WITH INDIRECT POINTERS - A method is provided for managing a file system including data objects. The data objects, indirect pointers and source pointers are stored in containers that have addresses and include addressable units of a memory. The objects are mapped to addresses for corresponding containers. The indirect pointer in a particular container points to the address of a container in which the corresponding object is stored. The source pointer in the particular container points to the address of the container to which the object in the particular container is mapped. An object in a first container is moved to a second container. The source pointer in the first container is used to find a third container to which the object is mapped. The indirect pointer in the third container is updated to point to the second container. The source pointer in the second container is updated to point to the third container. | 2014-07-03 |
20140189277 | STORAGE CONTROLLER SELECTING SYSTEM, STORAGE CONTROLLER SELECTING METHOD, AND RECORDING MEDIUM - A storage controller selecting system includes a time information storage unit, a receiver, and a processor. The time information storage unit is configured to store internal processing time information for each of a plurality of storage controllers. The internal processing time information for each individual storage controller relates to an internal processing time taken for processing performed within the individual storage controller in response to an access request to a logical volume. The receiver is configured to receive a creation request for requesting creation of a new logical volume. The processor is configured to select a certain storage controller from among the plurality of storage controllers according to the internal processing time information, and to cause the certain storage controller to create the new logical volume. | 2014-07-03 |
20140189278 | METHOD AND APPARATUS FOR ALLOCATING MEMORY SPACE WITH WRITE-COMBINE ATTRIBUTE - Embodiments of the present invention disclose a method and an apparatus for allocating a memory space with a write-combine attribute, including: determining, when resources of devices are scanned, a type and a size of a resource required by each device; determining, after the scanning of the resources of the devices is completed, a total size of write-combine memory spaces required by all first devices; then determining a starting address used to allocate a write-combine memory space to the first devices; and allocating one memory space jointly to all the first devices and allocating, from the one memory space, a sub-memory space to each first device. According to the embodiments of the present invention, a memory space with a write-combine attribute can be allocated to devices in a more reliable manner and by using a relatively simple allocation method. | 2014-07-03 |
20140189279 | METHOD OF COMPRESSING DATA AND DEVICE FOR PERFORMING THE SAME - A data compression method includes receiving an input data stream including a previous data block and a current data block, and executing a first comparison of a part of the previous data block with part of a previous reference data block, and a second comparison of the current data block with a current reference data block, where the first and second comparisons are executed in parallel. The method further includes selectively, based on results of the first and second comparisons, outputting the current data block or compressing an extended data block, where the extended data block includes the part of the previous data block and the current data block. | 2014-07-03 |
20140189280 | Reuse of Host Hibernation Storage Space By Memory Controller - A method for data storage includes, in a host system that operates alternately in a normal state and a hibernation state, reserving a hibernation storage space in a non-volatile storage device for storage of hibernation-related information in preparation for entering the hibernation state. While the host system is operating in the normal state, a storage task other than storage of the hibernation-related information is performed using at least a portion of the reserved hibernation storage space. | 2014-07-03 |
20140189281 | METHODS AND APPARATUS FOR COMPRESSED AND COMPACTED VIRTUAL MEMORY - A method and an apparatus for a memory device including a dynamically updated portion of compressed memory for a virtual memory are described. The memory device can include an uncompressed portion of memory separate from the compressed portion of memory. The virtual memory may be capable of mapping a memory address to the compressed portion of memory. A memory region allocated in the uncompressed portion of memory can be compressed into the compressed portion of memory. As a result, the memory region can become available (e.g. after being compressed) for future allocation requested in the memory device. The compressed portion of memory may be updated to store the compressed memory region. The compressed memory region may be decompressed back to the uncompressed portion in the memory device in response to a request to access data in the compressed memory region. | 2014-07-03 |
20140189282 | STORAGE SYSTEM AND METHOD OF ADJUSTING SPARE MEMORY SPACE IN STORAGE SYSTEM - A method includes determining a size of a recommended spare memory space of each of one or more storage nodes based on a state of the storage nodes, and adjusting a spare memory space of each of the storage nodes based on the size of the recommended spare memory space. | 2014-07-03 |
20140189283 | SEMICONDUCTOR MEMORY DEVICE AND OPERATING METHOD FOR THE SAME - Provided is a semiconductor memory device that may efficiently map an internal address used inside the semiconductor memory device in response to an external address that is applied from the outside of the semiconductor memory device. The semiconductor memory device may include a memory cell array configured to include a first main cell array, a first spare cell array, a second main cell array, and a second spare cell array, each of which has internal cells that are selected in response to an internal address, and an address mapping unit configured to map the external address as the internal address when the external address designates the first main and spare cell arrays, and to perform a calculation with a given value and the external address and to map the calculation result as the internal address when the external address designates the second main and spare cell arrays. | 2014-07-03 |
20140189284 | SUB-BLOCK BASED WEAR LEVELING - Embodiments of the invention describe an apparatus, system and method for sub-block based wear leveling for memory devices. Embodiments of the invention may receive a write request to a physical memory address including a physical block address and a physical sub-block address. An address remapping table is accessed to translate the physical block address to a memory device block address to locate a plurality of memory device sub-blocks. A plurality of sub-block activity counters are accessed, each sub-block activity counter associated with one of the memory device sub-blocks. One of the plurality of memory device sub-blocks is selected to store write data of the write request based, at least in part, on values of the plurality of sub-block activity counters, and the value of the sub-block activity counter associated with the selected memory device sub-block is updated. | 2014-07-03 |
20140189285 | Apparatus and Method For Tracking TLB Flushes On A Per Thread Basis - A method is described that includes recognizing that TLB information of one or more hardware threads is to be invalidated. The method also includes determining which ones of the one or more hardware threads are in a state in which TLB information is flushed. The method also includes directing a TLB shootdown to those of the one or more hardware threads that are in a state in which TLB information is not flushed. | 2014-07-03 |
20140189286 | WEAR LEVELING WITH MARCHING STRATEGY - A method for managing utilization of a memory including a physical address space comprises mapping logical addresses of data objects to locations within the physical address space, and defining a plurality of address segments in the space as an active window. The method comprises allowing writes of data objects having logical addresses mapped to locations within the plurality of address segments in the active window. The method comprises, upon detection of a request to write a data object having a logical address mapped to a location outside the active window, updating the mapping so that the logical address maps to a selected location within the active window, and then allowing the write to the selected location. The method comprises maintaining access data indicating utilization of the plurality of address segments in the active window, and adding and removing address segments from the active window in response to the access data. | 2014-07-03 |
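A toy model of the marching-window remap step in the abstract above: a write whose logical address maps outside the active window is first remapped to a location inside it. The window contents, segment granularity, and free-location policy are assumptions for illustration.

```python
# Toy model of wear leveling with an active window: writes mapped outside
# the window are remapped into it before the write is allowed.
WINDOW = [4, 5, 6, 7]   # segment indices currently in the active window (assumed)

def remap_for_write(mapping, logical_addr, free_slots):
    seg = mapping[logical_addr]
    if seg not in WINDOW:                         # write falls outside the window:
        mapping[logical_addr] = free_slots.pop(0) # update mapping into the window
    return mapping[logical_addr]                  # location the write goes to

mapping = {"obj_a": 1, "obj_b": 5}
free = [4, 6]
loc_a = remap_for_write(mapping, "obj_a", free)   # outside -> remapped to 4
loc_b = remap_for_write(mapping, "obj_b", free)   # already inside -> unchanged
```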
20140189287 | COLLAPSING OF MULTIPLE NESTED LOOPS, METHODS AND INSTRUCTIONS - In an embodiment, the present invention is directed to a processor including a decode logic to receive a multi-dimensional loop counter update instruction and to decode the multi-dimensional loop counter update instruction into at least one decoded instruction, and an execution logic to execute the at least one decoded instruction to update at least one loop counter value of a first operand associated with the multi-dimensional loop counter update instruction by a first amount. Methods to collapse loops using such instructions are also disclosed. Other embodiments are described and claimed. | 2014-07-03 |
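The loop-collapsing idea above can be sketched in software: two nested counters become one flat counter, with `divmod` recovering the per-dimension indices — the kind of multi-dimensional counter update the instruction accelerates. The two-level case is an assumption; the filing covers arbitrary nesting.

```python
# Sketch of collapsing two nested loops into a single flat loop counter;
# divmod recovers both per-dimension counters from the flat index in one step.
def collapsed_indices(n_outer, n_inner):
    for flat in range(n_outer * n_inner):
        i, j = divmod(flat, n_inner)   # update both loop counters at once
        yield i, j

pairs = list(collapsed_indices(2, 3))
```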
20140189288 | INSTRUCTION TO REDUCE ELEMENTS IN A VECTOR REGISTER WITH STRIDED ACCESS PATTERN - A vector reduction instruction with non-unit strided access pattern is received and executed by the execution circuitry of a processor. In response to the instruction, the execution circuitry performs an associative reduction operation on data elements of a first vector register. Based on values of the mask register and a current element position being processed, the execution circuitry sequentially sets one or more data elements of the first vector register to a result, which is generated by the associative reduction operation applied to both a previous data element of the first vector register and a data element of a third vector register. The previous data element is located more than one element position away from the current element position. | 2014-07-03 |
20140189289 | INSTRUCTION FOR ACCELERATING SNOW 3G WIRELESS SECURITY ALGORITHM - Vector instructions for performing SNOW 3G wireless security operations are received and executed by the execution circuitry of a processor. The execution circuitry receives a first operand of the first instruction specifying a first vector register that stores a current state of a finite state machine (FSM). The execution circuitry also receives a second operand of the first instruction specifying a second vector register that stores data elements of a linear feedback shift register (LFSR) that are needed for updating the FSM. The execution circuitry executes the first instruction to produce an updated state of the FSM and an output of the FSM in a destination operand of the first instruction. | 2014-07-03 |
20140189290 | INSTRUCTION FOR FAST ZUC ALGORITHM PROCESSING - Vector instructions for performing ZUC stream cipher operations are received and executed by the execution circuitry of a processor. The execution circuitry receives a first vector instruction to perform an update to a linear feedback shift register (LFSR), and receives a second vector instruction to perform an update to a state of a finite state machine (FSM), where the FSM receives inputs from re-ordered bits of the LFSR. The execution circuitry executes the first vector instruction and the second vector instruction in a single-instruction multiple data (SIMD) pipeline. | 2014-07-03 |
20140189291 | Method And Apparatus For Integral Image Computation Instructions - A method is described that performs an image integral calculation by creating a second vector and creating a third vector. The second vector is created by executing a first instruction that adds alternating elements of a first vector to respective neighboring elements of the first vector and presents resulting summations into said second vector. The first instruction also passes through the respective neighboring elements to said second vector. The third vector is created by executing a second instruction that adds elements of one side of the second vector to an element of another side of the second vector and passes through the another side of the second vector. | 2014-07-03 |
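The neighbor-add-then-cross-add pattern above amounts to a running sum built in logarithmic steps. A scalar sketch of that generic log-step prefix sum is below; it models the effect of the two instructions, not their exact encoding, which is an assumption on my part.

```python
# Illustrative model of the two-instruction pattern: successive add passes
# with doubling stride turn a row of pixels into its running (prefix) sum,
# the row step of an integral-image computation.
def prefix_sum(v):
    v = list(v)
    step = 1
    while step < len(v):
        # each pass adds an element `step` positions to the left, or passes
        # the element through unchanged when no such neighbor exists
        v = [v[i] + (v[i - step] if i >= step else 0) for i in range(len(v))]
        step *= 2
    return v

row_sums = prefix_sum([1, 2, 3, 4])
```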
20140189292 | Functional Unit Having Tree Structure To Support Vector Sorting Algorithm and Other Algorithms - An apparatus is described having a functional unit of an instruction execution pipeline. The functional unit has a plurality of compare-and-exchange circuits coupled to network circuitry to implement a vector sorting tree for a vector sorting instruction. Each of the compare-and-exchange circuits has a respective comparison circuit that compares a pair of inputs. Each of the compare-and-exchange circuits has a same sided first output for presenting a higher of the two inputs and a same sided second output for presenting a lower of the two inputs, said comparison circuit to also support said functional unit's execution of a prefix min and/or prefix add instruction. | 2014-07-03 |
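A compare-and-exchange circuit with same-sided high/low outputs is the building block of a sorting network. A minimal sketch, assuming a standard five-comparator network over four elements (the filing's actual tree topology and width are not specified here):

```python
# Compare-and-exchange element: first output carries the higher input,
# second output the lower, matching the same-sided outputs described.
def compare_exchange(a, b):
    return (max(a, b), min(a, b))

# A standard 4-element sorting network built from five such elements,
# producing a descending sort (assumed topology for illustration).
def sort4_descending(v):
    a, b, c, d = v
    a, b = compare_exchange(a, b)
    c, d = compare_exchange(c, d)
    a, c = compare_exchange(a, c)
    b, d = compare_exchange(b, d)
    b, c = compare_exchange(b, c)
    return [a, b, c, d]

sorted_vec = sort4_descending([3, 7, 1, 9])
```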
20140189293 | Instructions for Sliding Window Encoding Algorithms - A processor is described having an instruction execution pipeline having a functional unit to execute an instruction that compares vector elements against an input value. Each of the vector elements and the input value have a first respective section identifying a location within data and a second respective section having a byte sequence of the data. The functional unit has comparison circuitry to compare respective byte sequences of the input vector elements against the input value's byte sequence to identify a number of matching bytes for each comparison. The functional unit also has difference circuitry to determine respective distances between the input vector's elements' byte sequences and the input value's byte sequence within the data. | 2014-07-03 |
20140189294 | SYSTEMS, APPARATUSES, AND METHODS FOR DETERMINING DATA ELEMENT EQUALITY OR SEQUENTIALITY - Systems, apparatuses, and methods of performing in a computer processor broadcasting data in response to a single vector packed broadcasting instruction that includes a source writemask register operand, a destination vector register operand, and an opcode. In some embodiments, the data of the source writemask register is zero extended prior to broadcasting. | 2014-07-03 |
20140189295 | Apparatus and Method of Efficient Vector Roll Operation - A machine readable storage medium containing program code is described that when processed by a processor causes a method to be performed. The method includes creating a resultant rolled version of an input vector by forming a first intermediate vector, forming a second intermediate vector and forming a resultant rolled version of an input vector. The first intermediate vector is formed by barrel rolling elements of the input vector along a first of two lanes defined by an upper half and a lower half of the input vector. The second intermediate vector is formed by barrel rolling elements of the input vector along a second of the two lanes. The resultant rolled version of the input vector is formed by incorporating upper portions of one of the intermediate vector's upper and lower halves as upper portions of the resultant's upper and lower halves and incorporating lower portions of the other intermediate vector's upper and lower halves as lower portions of the resultant's upper and lower halves. | 2014-07-03 |
20140189296 | SYSTEM, APPARATUS AND METHOD FOR LOOP REMAINDER MASK INSTRUCTION - A loop remainder mask instruction indicates a current iteration count of a loop as a first operand, an iteration limit of a loop as a second operand, and a destination. The loop contains iterations and each iteration includes a data element of the array. A processor receives the loop remainder mask instruction, decodes the instruction for execution, and stores a result of the execution in the destination. The result indicates a number of data elements of the array past an end of a preceding portion of the array that are to be handled separately from the preceding portion, the end of the preceding portion being where the current iteration count is recorded. | 2014-07-03 |
20140189297 | HETEROGENEOUS PROCESSOR APPARATUS AND METHOD - A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of two or more small physical processor cores; at least one large physical processor core having relatively higher performance processing capabilities and relatively higher power usage relative to the small physical processor cores; virtual-to-physical (V-P) mapping logic to expose the set of two or more small physical processor cores to software through a corresponding set of virtual cores and to hide the at least one large physical processor core from the software. | 2014-07-03 |
20140189298 | CONFIGURABLE RING NETWORK - An apparatus and computing device for providing a configurable ring network are provided herein. The apparatus includes logic to configure a ring processor for each of a plurality of processing elements, and logic to network each ring processor, wherein each ring processor communicates with other ring processors using a set of commands. | 2014-07-03 |
20140189299 | HETEROGENEOUS PROCESSOR APPARATUS AND METHOD - A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of large physical processor cores; a set of small physical processor cores having relatively lower performance processing capabilities and relatively lower power usage relative to the large physical processor cores; virtual-to-physical (V-P) mapping logic to expose the set of large physical processor cores to software through a corresponding set of virtual cores and to hide the set of small physical processor cores from the software. | 2014-07-03 |
20140189300 | Processing Core Having Shared Front End Unit - A processor having one or more processing cores is described. Each of the one or more processing cores has front end logic circuitry and a plurality of processing units. The front end logic circuitry is to fetch respective instructions of threads and decode the instructions into respective micro-code and input operand and resultant addresses of the instructions. Each of the plurality of processing units is to be assigned at least one of the threads, is coupled to said front end unit, and has a respective buffer to receive and store microcode of its assigned at least one of the threads. Each of the plurality of processing units also comprises: i) at least one set of functional units corresponding to a complete instruction set offered by the processor, the at least one set of functional units to execute its respective processing unit's received microcode; ii) registers coupled to the at least one set of functional units to store operands and resultants of the received microcode; iii) data fetch circuitry to fetch input operands for the at least one set of functional units' execution of the received microcode. | 2014-07-03 |
20140189301 | HIGH DYNAMIC RANGE SOFTWARE-TRANSPARENT HETEROGENEOUS COMPUTING ELEMENT PROCESSORS, METHODS, AND SYSTEMS - A processor of an aspect includes at least one lower processing capability and lower power consumption physical compute element and at least one higher processing capability and higher power consumption physical compute element. Migration performance benefit evaluation logic is to evaluate a performance benefit of a migration of a workload from the at least one lower processing capability compute element to the at least one higher processing capability compute element, and to determine whether or not to allow the migration based on the evaluated performance benefit. Available energy and thermal budget evaluation logic is to evaluate available energy and thermal budgets and to determine to allow the migration if the migration fits within the available energy and thermal budgets. Workload migration logic is to perform the migration when allowed by both the migration performance benefit evaluation logic and the available energy and thermal budget evaluation logic. | 2014-07-03 |
20140189302 | OPTIMAL LOGICAL PROCESSOR COUNT AND TYPE SELECTION FOR A GIVEN WORKLOAD BASED ON PLATFORM THERMALS AND POWER BUDGETING CONSTRAINTS - A processor includes multiple physical cores that support multiple logical cores of different core types, where the core types include a big core type and a small core type. A multi-threaded application includes multiple software threads that are concurrently executed by a first subset of logical cores in a first time slot. Based on data gathered from monitoring the execution in the first time slot, the processor selects a second subset of logical cores for concurrent execution of the software threads in a second time slot. Each logical core in the second subset has one of the core types that matches the characteristics of one of the software threads. | 2014-07-03 |
20140189303 | MULTISTAGE MODULE EXPANSION SYSTEM AND MULTISTAGE MODULE COMMUNICATION METHOD - A multistage module expansion system and multistage module communication method, applicable to a set-top box, are introduced. The system includes a master module, at least a preceding expansion module, and at least a succeeding expansion module. The master module generates and sends a control instruction to the preceding expansion module and the succeeding expansion module. The preceding expansion module and the succeeding expansion module each determine whether the control instruction is of a type executable by the preceding expansion module and the succeeding expansion module, respectively. If the determination is affirmative, the preceding expansion module creates and sends a preceding data packet to the master module, and the succeeding expansion module creates and sends a succeeding data packet to the preceding expansion module, such that the preceding expansion module sends the succeeding data packet to the master module. | 2014-07-03 |
20140189304 | BIT-LEVEL REGISTER FILE UPDATES IN EXTENSIBLE PROCESSOR ARCHITECTURE - This document discusses, among other things, systems and methods to receive an instruction to selectively update a value of one or more selected bits of a first register, to receive the one or more selected bits of the first register to be updated and one or more selected bits of the first register to remain unchanged, and to selectively update the value of the one or more selected bits of the first register using a first write port without receiving the value of the one or more selected bits of the first register. In an example, the value of the one or more selected bits of the first register can be updated without receiving the value of the first register, in certain applications, reducing the number of read ports required to update the value of the first register. | 2014-07-03 |
20140189305 | REDUNDANT EXECUTION FOR RELIABILITY IN A SUPER FMA ALU - A system, processor and method to increase computational reliability by using underutilized portions of a data path with a SuperFMA ALU. The method allows the reuse of underutilized hardware to implement spatial redundancy by using detection during the dispatch stage to determine if the operation may be executed by redundant hardware in the ALU. During execution, if determination is made that the correct conditions exist as determined by the redundant execution modes, the SuperFMA ALU performs the operation with redundant execution and compares the results for a match in order to generate a computational result. The method to increase computational reliability by using redundant execution is advantageous because the hardware cost of adding support for redundant execution is low and the complexity of implementation of the disclosed method is minimal due to the reuse of existing hardware. | 2014-07-03 |
20140189306 | ENHANCED LOOP STREAMING DETECTOR TO DRIVE LOGIC OPTIMIZATION - An enhanced loop streaming detection mechanism is provided in a processor to reduce power consumption. The processor includes a decoder to decode instructions in a loop into micro-operations, and a loop streaming detector to detect the presence of the loop in the micro-operations. The processor also includes a loop characteristic tracker unit to identify hardware components downstream from the decoder that are not to be used by the micro-operations in the loop, and to disable the identified hardware components. The processor also includes execution circuitry to execute the micro-operations in the loop with the identified hardware components disabled. | 2014-07-03 |
20140189307 | METHODS, APPARATUS, INSTRUCTIONS, AND LOGIC TO PROVIDE VECTOR ADDRESS CONFLICT RESOLUTION WITH VECTOR POPULATION COUNT FUNCTIONALITY - Instructions and logic provide SIMD address conflict resolution with vector population count functionality. Some embodiments include processors with a register with a variable plurality of data fields, each of the data fields to store a variable second plurality of bits. A destination register has corresponding data fields, each of these data fields to store a count of the number of bits set to one for corresponding data fields. Responsive to decoding a vector population count instruction, execution units count the number of bits set to one for each of data fields in the register, and store the counts in corresponding data fields of the first destination register. Vector population count instructions can be used with variable sized elements and conflict masks to generate iteration counts and completion masks to be used each iteration to resolve dependencies in gather-modify-scatter SIMD operations. | 2014-07-03 |
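The per-field population count above has a direct software analogue: one count of set bits per data field, written to the corresponding destination field. A minimal sketch (field width left abstract, since the filing allows variable-sized elements):

```python
# Sketch of a vector population count: for each data field, count the
# bits set to one and store the count in the corresponding destination field.
def vpopcnt(fields):
    return [bin(x).count("1") for x in fields]

counts = vpopcnt([0b1011, 0b0000, 0b1111])
```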
20140189308 | METHODS, APPARATUS, INSTRUCTIONS, AND LOGIC TO PROVIDE VECTOR ADDRESS CONFLICT DETECTION FUNCTIONALITY - Instructions and logic provide SIMD address conflict detection functionality. Some embodiments include processors with a register with a variable plurality of data fields, each of the data fields to store an offset for a data element in a memory. A destination register has corresponding data fields, each of these data fields to store a variable second plurality of bits to store a conflict mask having a mask bit for each offset. Responsive to decoding a vector conflict instruction, execution units compare the offset in each data field with every less significant data field to determine if they hold a matching offset, and in corresponding conflict masks in the destination register, set any mask bits corresponding to a less significant data field with a matching offset. Vector address conflict detection can be used with variable sized elements and to generate conflict masks to resolve dependencies in gather-modify-scatter SIMD operations. | 2014-07-03 |
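The conflict-mask semantics above can be modeled directly: each element's offset is compared against every less significant element, and the mask gets a bit per matching less significant field. A minimal sketch of that comparison pattern:

```python
# Model of vector conflict detection: element i's mask has bit j set when
# the less significant field j (j < i) holds a matching offset.
def vpconflict(offsets):
    masks = []
    for i, off in enumerate(offsets):
        mask = 0
        for j in range(i):            # compare only against less significant fields
            if offsets[j] == off:
                mask |= 1 << j
        masks.append(mask)
    return masks

# Offsets 0, 2 and 3 all target location 3: elements 2 and 3 see conflicts.
conflicts = vpconflict([3, 5, 3, 3])
```

Gather-modify-scatter loops use these masks to serialize only the lanes that actually collide, letting all conflict-free lanes proceed in one SIMD pass.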
20140189309 | METHODS, APPARATUS, INSTRUCTIONS, AND LOGIC TO PROVIDE PERMUTE CONTROLS WITH LEADING ZERO COUNT FUNCTIONALITY - Instructions and logic provide SIMD permute controls with leading zero count functionality. Some embodiments include processors with a register with a plurality of data fields, each of the data fields to store a second plurality of bits. A destination register has corresponding data fields, each of these data fields to store a count of the number of most significant contiguous bits set to zero for corresponding data fields. Responsive to decoding a vector leading zero count instruction, execution units count the number of most significant contiguous bits set to zero for each of data fields in the register, and store the counts in corresponding data fields of the first destination register. Vector leading zero count instructions can be used to generate permute controls and completion masks to be used along with the set of permute controls, to resolve dependencies in gather-modify-scatter SIMD operations. | 2014-07-03 |
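A per-element leading zero count, as described above, is a count of most significant contiguous zero bits in each data field. A minimal sketch assuming 8-bit fields (the filing covers variable element sizes):

```python
# Sketch of a vector leading zero count over 8-bit fields: for a value x,
# the count is 8 - bit_length(x); bit_length(0) == 0 gives the full 8.
def vlzcnt8(fields):
    return [8 - x.bit_length() for x in fields]

lz = vlzcnt8([0b00010000, 0b00000000, 0b11111111])
```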
20140189310 | FAULT DETECTION IN INSTRUCTION TRANSLATIONS - In one embodiment, a method for identifying and replacing code translations that generate spurious fault events includes detecting, while executing a first native translation of target instruction set architecture (ISA) instructions, occurrence of a fault event, executing the target ISA instructions or a functionally equivalent version thereof, determining whether occurrence of the fault event is replicated while executing the target ISA instructions or the functionally equivalent version thereof, and in response to determining that the fault event is not replicated, determining whether to allow future execution of the first native translation or to prevent such future execution in favor of forming and executing one or more alternate native translations. | 2014-07-03 |
20140189311 | SYSTEM AND METHOD FOR PERFORMING A SHUFFLE INSTRUCTION - An apparatus and method for performing a shuffle operation on packed data using computer-implemented steps is described. In one embodiment, a first packed data operand having at least two data elements is accessed. A second packed data operand having at least two data elements is accessed. One of the data elements in the first packed data operand is shuffled into a lower destination field of a destination register, and one of the data elements in the second packed data operand is shuffled into an upper destination field of the destination register. | 2014-07-03 |
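The two-operand shuffle above selects one element from each packed operand into the lower and upper destination fields. A toy model, with the two-field destination and selector encoding as illustrative assumptions:

```python
# Toy model of the shuffle: one selected element of the first packed
# operand fills the lower destination field, and one selected element of
# the second packed operand fills the upper destination field.
def shuffle2(op1, op2, sel1, sel2):
    return (op1[sel1], op2[sel2])   # (lower field, upper field)

dest = shuffle2([10, 20], [30, 40], sel1=1, sel2=0)
```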
20140189312 | PROGRAMMABLE HARDWARE ACCELERATORS IN CPU - Embodiments of the present invention may include a data processing system comprising a processing execution block to execute instructions stored in an instruction queue, a programmable hardware accelerator, and a controller programmed to monitor the instruction queue to detect a first type of instructions stored in the instruction queue, reprogram the programmable hardware accelerator to execute the first type of instructions, and transmit the first type of instructions to the programmable hardware accelerator to be executed. | 2014-07-03 |
20140189313 | QUEUED INSTRUCTION RE-DISPATCH AFTER RUNAHEAD - Various embodiments of microprocessors and methods of operating a microprocessor during runahead operation are disclosed herein. One example method of operating a microprocessor includes identifying a runahead-triggering event associated with a runahead-triggering instruction and, responsive to identification of the runahead-triggering event, entering runahead operation and inserting the runahead-triggering instruction along with one or more additional instructions in a queue. The example method also includes resuming non-runahead operation of the microprocessor in response to resolution of the runahead-triggering event and re-dispatching the runahead-triggering instruction along with the one or more additional instructions from the queue to the execution logic. | 2014-07-03 |
20140189314 | Real Time Instruction Trace Processors, Methods, and Systems - A method of an aspect includes generating real time instruction trace (RTIT) packets for a first logical processor of a processor. The RTIT packets indicate a flow of software executed by the first logical processor. The RTIT packets are stored in an RTIT queue corresponding to the first logical processor. The RTIT packets are transferred from the RTIT queue to memory predominantly with firmware of the processor. Other methods, apparatus, and systems are also disclosed. | 2014-07-03 |
20140189315 | Copy-On-Write Buffer For Restoring Program Code From A Speculative Region To A Non-Speculative Region - An apparatus is described having an out-of-order instruction execution pipeline. The out-of-order execution pipeline has a first circuit and a second circuit. The first circuit is to hold a pointer to physical storage space where information is kept that cannot yet be confirmed as being free of potential dependencies on the information. The second circuit is to hold the pointer if the pointer existed in the first circuit when a non speculative region of program code ended and upon retirement of a following speculative overwriter instruction originally coded to overwrite the information. | 2014-07-03 |
20140189316 | EXECUTION PIPELINE DATA FORWARDING - In one embodiment, in an execution pipeline having a plurality of execution subunits, a method of using a bypass network to directly forward data from a producing execution subunit to a consuming execution subunit is provided. The method includes producing output data with the producing execution subunit, consuming input data with the consuming execution subunit, for one or more intervening operations whose input is the output data from the producing execution subunit and whose output is the input data to the consuming execution subunit, evaluating those one or more intervening operations to determine whether their execution would compose an identity function, and if the one or more intervening operations would compose such an identity function, controlling the bypass network to forward the producing execution subunit's output data directly to the consuming execution subunit. | 2014-07-03 |
20140189317 | APPARATUS AND METHOD FOR A HYBRID LATENCY-THROUGHPUT PROCESSOR - An apparatus and method are described for executing both latency-optimized execution logic and throughput-optimized execution logic on a processing device. For example, a processor according to one embodiment comprises: latency-optimized execution logic to execute a first type of program code; throughput-optimized execution logic to execute a second type of program code, wherein the first type of program code and the second type of program code are designed for the same instruction set architecture; logic to identify the first type of program code and the second type of program code within a process and to distribute the first type of program code for execution on the latency-optimized execution logic and the second type of program code for execution on the throughput-optimized execution logic. | 2014-07-03 |
20140189318 | AUTOMATIC REGISTER PORT SELECTION IN EXTENSIBLE PROCESSOR ARCHITECTURE - This document discusses, among other things, systems and methods to access n consecutive entries of a register file in a single operation using a register file entry index consisting of B bits, wherein B is less than the binary logarithm of a depth of the register file, which corresponds to the number of entries in the register file, and to automatically select, for a set of register arguments for the n consecutive entries, between a register port for each argument requiring a register port or one or more shared register ports for the set of register arguments according to description of an instruction set architecture associated with the register file. | 2014-07-03 |
20140189319 | Opportunistic Utilization of Redundant ALU - A processor includes at least one processing core that includes an operation dispatch for dispatching operations from an instruction pipeline, a plurality of arithmetic logic units for executing the operations, a plurality of multiplexers, each of which connects the operation dispatch to a respective arithmetic logic unit, and a controller configured to selectively enable at least one multiplexer to connect the operation dispatch to at least one arithmetic logic unit based on a reliability mode associated with the operation. | 2014-07-03 |
20140189320 | Instruction for Determining Histograms - A processor is described having a functional unit of an instruction execution pipeline. The functional unit has comparison bank circuitry and adder circuitry. The comparison bank circuitry is to compare one or more elements of a first input vector against an element of a second input vector. The adder circuitry is coupled to the comparison bank circuitry to add the number of elements of the second input vector that match a value of the first input vector on an element by element basis of the first input vector. | 2014-07-03 |
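The compare-bank-plus-adder datapath above amounts to, per element of the first vector, counting matches in the second vector. A minimal sketch of that per-bin matching step:

```python
# Sketch of the histogram instruction's datapath: for each element (bin
# value) of the first input vector, the compare bank flags matching
# elements of the second input vector and the adder totals the matches.
def vhist(bins, samples):
    return [sum(1 for s in samples if s == b) for b in bins]

hist = vhist([0, 1, 2], [1, 2, 1, 0, 1])
```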
20140189321 | INSTRUCTIONS AND LOGIC TO VECTORIZE CONDITIONAL LOOPS - Instructions and logic provide vectorization of conditional loops. A vector expand instruction has a parameter to specify a source vector, a parameter to specify a conditions mask register, and a destination parameter to specify a destination vector to hold n consecutive vector elements, each of the plurality of n consecutive vector elements having a same variable partition size of m bytes. In response to the processor instruction, data is copied from consecutive vector elements in the source vector, and expanded into unmasked vector elements of the specified destination vector, without copying data into masked vector elements of the destination vector, wherein n varies responsive to the processor instruction executed. The source vector may be a register and the destination vector may be in memory. Some embodiments store counts of the condition decisions. Alternative embodiments may store other data, for example such as target addresses, or table offsets, or indicators of processing directives, etc. | 2014-07-03 |
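The expand semantics above can be sketched in a few lines: consecutive source elements are copied in order into the unmasked destination positions only. Merging (masked positions keep their prior contents) is an assumption here; zeroing masked positions is a common alternative.

```python
# Model of a masked vector expand: consecutive source elements fill the
# unmasked destination lanes in order; masked lanes keep their old values.
def vexpand(src, mask, dest):
    it = iter(src)
    return [next(it) if m else d for m, d in zip(mask, dest)]

out = vexpand([10, 20, 30], [1, 0, 1, 0, 1], [0, 0, 0, 0, 0])
```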
20140189322 | Systems, Apparatuses, and Methods for Masking Usage Counting - Embodiments of systems, apparatuses, and methods for counting instructions of a particular type are described herein. In some embodiments, a processor includes a plurality of write mask registers, logic to determine write mask register usage of an instruction in a particular manner and a counter to count a number of instances of instructions that have been determined to use a write mask register in the particular manner. | 2014-07-03 |
20140189323 | APPARATUS AND METHOD FOR PROPAGATING CONDITIONALLY EVALUATED VALUES IN SIMD/VECTOR EXECUTION - An apparatus and method for propagating conditionally evaluated values. For example, a method according to one embodiment comprises: reading each value contained in an input mask register, each value being a true value or a false value and having a bit position associated therewith; for each true value read from the input mask register, generating a first result containing the bit position of the true value; for each false value read from the input mask register following the first true value, adding the vector length of the input mask register to a bit position of the last true value read from the input mask register to generate a second result; and storing each of the first results and second results in bit positions of an output register corresponding to the bit positions read from the input mask register. | 2014-07-03 |
20140189324 | PHYSICAL REGISTER TABLE FOR ELIMINATING MOVE INSTRUCTIONS - Embodiments of an invention for a physical register table for eliminating move instructions are disclosed. In one embodiment, a processor includes a physical register file, a register allocation table, and a physical register table. The register allocation table is to store mappings of logical registers to physical registers. The physical register table is to store entries including pointers to physical registers in the mappings. The number of entry locations in the physical register table is less than the number of physical registers in the physical register file. | 2014-07-03 |
20140189325 | PAGING IN SECURE ENCLAVES - Embodiments of an invention for paging in secure enclaves are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive a first instruction. The execution unit is to execute the first instruction, wherein execution of the first instruction includes evicting a first page from an enclave page cache. | 2014-07-03 |
20140189326 | MEMORY MANAGEMENT IN SECURE ENCLAVES - Embodiments of an invention for memory management in secure enclaves are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive a first instruction and a second instruction. The execution unit is to execute the first instruction, wherein execution of the first instruction includes allocating a page in an enclave page cache to a secure enclave. The execution unit is also to execute the second instruction, wherein execution of the second instruction includes confirming the allocation of the page. | 2014-07-03 |
20140189327 | ACKNOWLEDGEMENT FORWARDING - A method, executed by a network processor, for processing data packets in a pipeline. The pipeline includes a plurality of logical blocks, each logical block configured to process one stage of the pipeline. Each data packet includes a descriptor and data. The network processor is coupled to a resource for storing the data. The method reduces latency and enables non-blocking processing of data packets by forwarding a unique identification of a write request from a first logical block to a subsequent second logical block in the pipeline, the write request to modify the data in the resource. The method includes receiving the descriptor for processing at the first logical block, generating the write request and the unique identification for the write request, transmitting the write request to the resource, and transmitting the unique identification towards the second logical block before an acknowledgement is returned by the resource. | 2014-07-03 |
20140189328 | POWER REDUCTION BY USING ON-DEMAND RESERVATION STATION SIZE - A computer processor, a computer system and a corresponding method involve a reservation station that stores instructions which are not ready for execution. The reservation station includes a storage area that is divided into bundles of entries. Each bundle is switchable between an open state in which instructions can be written into the bundle and a closed state in which instructions cannot be written into the bundle. A controller selects which bundles are open based on occupancy levels of the bundles. | 2014-07-03 |
20140189329 | COOPERATIVE THREAD ARRAY GRANULARITY CONTEXT SWITCH DURING TRAP HANDLING - Techniques are provided for handling a trap encountered in a thread that is part of a thread array that is being executed in a plurality of execution units. In these techniques, a data structure with an identifier associated with the thread is updated to indicate that the trap occurred during the execution of the thread array. Also in these techniques, the execution units execute a trap handling routine that includes a context switch. The execution units perform this context switch for at least one of the execution units as part of the trap handling routine while allowing the remaining execution units to exit the trap handling routine before the context switch. One advantage of the disclosed techniques is that the trap handling routine operates efficiently in parallel processors. | 2014-07-03 |
20140189330 | OPTIONAL BRANCHES - Branch instructions are provided for improved execution performance. The branch instruction includes one or more paths that are marked as a safe path for execution. If a marked path is executed based on a branch prediction, execution continues to completion even after it is determined that the other path is the correct path. | 2014-07-03 |
20140189331 | SYSTEM OF IMPROVED LOOP DETECTION AND EXECUTION - A method may include identifying loop information corresponding to a plurality of loop instructions. The loop instructions are stored into a queue. The loop instructions are replayed from the queue for execution. Loop iterations are counted based on the identified loop information. It is determined whether the last iteration of the loop is done. If the last iteration is not done, the loop instructions continue to be replayed until the last iteration is done. | 2014-07-03 |
20140189332 | APPARATUS AND METHOD FOR LOW-LATENCY INVOCATION OF ACCELERATORS - An apparatus and method are described for providing low-latency invocation of accelerators. For example, a processor according to one embodiment comprises: a command register for storing command data identifying a command to be executed; a result register to store a result of the command or data indicating a reason why the command could not be executed; execution logic to execute a plurality of instructions including an accelerator invocation instruction to invoke one or more accelerator commands; and one or more accelerators to read the command data from the command register and responsively attempt to execute the command identified by the command data. | 2014-07-03 |
20140189333 | APPARATUS AND METHOD FOR TASK-SWITCHABLE SYNCHRONOUS HARDWARE ACCELERATORS - A processor comprising: execution logic to execute a first thread including an accelerator invocation instruction to invoke an accelerator command; an accelerator to execute an accelerator thread in response to the accelerator command, the accelerator to store state data associated with the accelerator thread in an application memory area in memory, wherein prior to executing the accelerator thread, the accelerator is to lock entries in a translation lookaside buffer (TLB) associated with the accelerator thread to prevent an exception which might otherwise result. | 2014-07-03 |
20140189334 | ELECTRONIC APPARATUS HIBERNATION RECOVERY SETTING METHOD AND ELECTRONIC APPARATUS HAVING HIBERNATION STATE AND HIBERNATION RECOVERY MECHANISM - An electronic apparatus hibernation recovery setting method for an electronic apparatus is provided. The method includes: assigning different priorities to multiple tasks in process before the electronic apparatus enters a hibernation state; storing multiple image files of the tasks; and first reading and loading the image file for the task having the highest priority when the electronic apparatus recovers from the hibernation state. | 2014-07-03 |
20140189335 | FIRMWARE UPGRADE ERROR DETECTION AND AUTOMATIC ROLLBACK - A system includes a utility meter. The utility meter includes a network interface and a processor. The processor is configured to determine whether the network interface is operational subsequent to a bootup of the utility meter. The processor is also configured to initiate a reboot of the utility meter using a known valid firmware instruction set of the utility meter if the network interface is determined to be non-operational. | 2014-07-03 |
20140189336 | METHODS AND APPARATUS TO SUPPORT AUTHENTICATED VARIABLES - Methods and apparatus to support authenticated variables are disclosed. An example method includes, in response to an update request directed to an authenticated variable of a computing platform and received during a second stage of a first instance of a booting process, the booting process including a first stage and the second stage, restricting the update request from accessing the authenticated variable during the second stage of the first instance of the booting process and storing the update request in a queue. | 2014-07-03 |
20140189337 | ELECTRONIC DEVICE HAVING UPDATABLE BIOS AND BIOS UPDATING METHOD THEREOF - An electronic device having an updatable BIOS is used to perform a BIOS updating method. The electronic device electrically connects to a server, in which update data is stored. The electronic device includes a Basic Input/Output System (BIOS), a network connection module and a switch. A BIOS program is stored in the BIOS, and a connecting program is stored in the network connection module for connecting to the server. When the electronic device is being updated, the BIOS switches to electrically connect to the network connection module via the switch, and the network connection module connects to the server by executing the connecting program, downloads the update data for the BIOS, and writes the update data to the BIOS to update the BIOS program. | 2014-07-03 |
20140189338 | ELECTRONIC DEVICE AND METHOD FOR DETECTING BOOTING TIME PERIOD FOR ELECTRONIC DEVICE - An electronic device includes a boot controlling chip, a power switch, a display unit, a power management unit and a processing unit. The power switch generates an electronic signal to the boot controlling chip while being triggered by a user. The boot controlling chip includes a boot sequence controlling module and a timer. The boot sequence controlling module controls the electronic device to boot in preset steps when the electronic signal is received. The timer starts timing when the electronic signal is received and ends timing when the boot sequence controlling module controls the electronic device to finish a login step of the boot steps, whereby the timer obtains the duration of the interval. The processing unit controls the display unit to display this duration. A method for detecting the boot time of an electronic device is also provided. | 2014-07-03 |
20140189339 | Method For Switching Between Virtualized and Non-Virtualized System Operation - A method performed by an embedded system controlled by a CPU and capable of operating as a virtualized system under supervision of a hypervisor or as a non-virtualized system under supervision of an operating system, is provided. The embedded system is executed in a normal mode if no execution of any security critical function is required, where normal mode execution is performed under supervision of the operating system. If execution of a security critical function is required, the operating system switches execution of the embedded system from normal mode to protected mode, where protected mode execution is performed under supervision of the hypervisor, by handing over the execution of the embedded system from the operating system to the hypervisor. When execution of the security critical function is no longer required, the system is switched from protected mode to normal mode, under supervision of the hypervisor. | 2014-07-03 |
20140189340 | SECURE BOOT INFORMATION WITH VALIDATION CONTROL DATA SPECIFYING A VALIDATION TECHNIQUE - Examples disclosed herein relate to secure boot information with validation control data specifying a validation technique. Examples include determining, with the specified validation technique, whether validation data is consistent with the secure boot information. | 2014-07-03 |
20140189341 | ELECTRONIC DEVICE HAVING AN ACTIVE EDGE - An electronic device is provided that includes a base, a processor, and a tablet having a front surface, a rear surface and a bottom edge surface. A processor may operate at a first operating condition when the tablet is coupled to the base, and the processor may operate at a second operating condition when the tablet is not coupled to the base. The tablet may include a heat conducting device and an active edge. The heat conducting device may conduct heat from the processor to the active edge where the heat may be dissipated using supplemental cooling. | 2014-07-03 |
20140189342 | METHOD FOR CONTROLLING REGISTRATION OF INPUT DEVICE IN INPUT HANDLER INSTANCE, TERMINAL AND STORAGE DEVICE - A method for controlling an input device to be registered with an input handler instance includes: upon detection of an input device, an input handler instance corresponding to a CPU frequency adjusting mode obtains device driver information of the input device; determines whether the device driver information is the same as one of the sets of registration match information stored in the input handler instance; if so, sends successful registration information to an input device instance corresponding to the input device to allow an input event to be reported; if the device driver information of the input device is not the same as any of the sets of registration match information, sends failure registration information to the input device instance to disallow an input event to be reported; and the input device instance stores an identifier of the input handler instance upon reception of the successful registration information. | 2014-07-03 |
20140189343 | SECURE INTERNET PROTOCOL (IP) FRONT-END FOR VIRTUALIZED ENVIRONMENTS - An IPSec front-end may be configured to encrypt, decrypt and authenticate packets on behalf of a host on an insecure network and a peer on a secure network. For example, the IPSec front-end may receive internet protocol (IP) packets from the host and encrypt the data and format the data as an internet protocol security (IPsec) packet for transmission to the peer. When the peer responds with an IPSec packet, the IPSec front-end may decrypt the data and format the data as an IP packet. The IPSec front-end may be software executing on a Linux server. | 2014-07-03 |
20140189344 | PROVIDING A WEB PAGE TO A CLIENT - To display pieces of data provided by different servers in one page, a providing apparatus provides a page to a client terminal, the page including data retrieved from a server. The providing apparatus includes a) a page return unit for, upon receipt of a page retrieval request from the client terminal, returning a page including code to the client terminal, the code to be executed on the client terminal, the code causing the client terminal to transmit a data transmission instruction to the server, the data transmission instruction instructing the server to transmit the data to the providing apparatus, b) a data reception unit for receiving the data transmitted by the server, the server having received the data transmission instruction from the client terminal, and c) a transfer unit for transferring the data received from the server, to the client terminal. | 2014-07-03 |
20140189345 | METHOD FOR DEFINING A FILTERING MODULE, ASSOCIATED FILTERING MODULE - A method is provided for defining a filtering module between a first module processing information with a first sensitivity level and a second module processing information with a second sensitivity level, the two modules being connected, in parallel with the filtering module, by a cryptographic module. The method includes: defining, in a language that can be compiled, a set of filtering rules defining the properties of messages whose transmission is allowed between the first and second modules; validation processing the predefined set of rules, verifying that a transmission authorization or refusal is in fact provided by applying the set of rules to any information that may be presented at the input of the filtering module; compiling the predefined set of rules; and integrating the compiled set of rules into a rules database of the filtering module. | 2014-07-03 |
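The propagation rule abstracted in 20140189323 can be illustrated with a short sketch. This is a minimal Python model of the described behavior, not Intel's implementation; the handling of false values that precede the first true value is unspecified in the abstract, so this sketch assumes they produce zero.

```python
def propagate_positions(mask):
    """Model of the conditional-value propagation described in 20140189323.

    For each true value in the input mask, the result is that bit position.
    For each false value following the first true value, the result is the
    bit position of the last true value plus the vector length.
    False values before any true value are assumed (here) to yield 0.
    """
    vlen = len(mask)          # vector length of the input mask register
    out = [0] * vlen
    last_true = None
    for pos, bit in enumerate(mask):
        if bit:
            last_true = pos   # true value: record and emit its position
            out[pos] = pos
        elif last_true is not None:
            # false value after the first true: last true position + vlen
            out[pos] = last_true + vlen
    return out
```

For example, `propagate_positions([0, 1, 0, 0, 1, 0])` yields `[0, 1, 7, 7, 4, 10]` with a vector length of 6.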
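The move-elimination idea in 20140189324 rests on register renaming: a MOV can be retired without ever executing, by pointing the destination logical register at the source's physical register. The following Python sketch of a rename table is purely illustrative (all class and method names are hypothetical), and it omits the application's distinguishing detail of a physical register table smaller than the physical register file.

```python
class RenameTable:
    """Toy logical-to-physical register mapping demonstrating move elimination."""

    def __init__(self, num_physical):
        self.free = list(range(num_physical))  # free physical registers
        self.map = {}                          # logical name -> physical index
        self.values = {}                       # physical index -> value (simulated)

    def write(self, logical, value):
        phys = self.free.pop(0)                # allocate a fresh physical register
        self.map[logical] = phys
        self.values[phys] = value

    def move(self, dst, src):
        # Move elimination: dst is remapped to src's physical register.
        # No value is copied and no execution slot is consumed.
        self.map[dst] = self.map[src]

    def read(self, logical):
        return self.values[self.map[logical]]
```

After `rt.write('r1', 42)` and `rt.move('r2', 'r1')`, both logical registers resolve to the same physical register, so `rt.read('r2')` returns 42 without any data movement.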
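The on-demand reservation station of 20140189328 saves power by opening only as many entry bundles as occupancy demands. A hedged sketch of one possible occupancy-based policy follows; the bundle size, threshold, and function name are assumptions for illustration, not the claimed controller.

```python
BUNDLE_SIZE = 8  # entries per bundle (assumed)

def select_open_bundles(occupancy, threshold=0.75):
    """Return how many bundles to keep open, given per-bundle occupancy.

    At least one bundle is always open so instructions can be written.
    While the entries used in the currently open bundles exceed a fraction
    of their capacity, the next closed bundle is opened; closed bundles
    can be power-gated.
    """
    open_bundles = 1
    while (open_bundles < len(occupancy)
           and sum(occupancy[:open_bundles]) > threshold * open_bundles * BUNDLE_SIZE):
        open_bundles += 1
    return open_bundles
```

With one full bundle (`[8, 0, 0, 0]`) the policy opens a second bundle; with a lightly used first bundle (`[2, 0, 0, 0]`) it keeps only one open.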
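The loop-replay flow of 20140189331 — capture the loop body in a queue, replay it, and count iterations against the identified loop information — can be modeled in a few lines. This sketch represents instructions as callables for simplicity; it is an illustration of the stated steps, not the claimed hardware.

```python
from collections import deque

def run_loop(body, iterations):
    """Store the loop instructions in a queue and replay them per iteration.

    body: list of callables standing in for loop instructions; each takes
          the current iteration count.
    iterations: the loop trip count taken from the identified loop information.
    """
    queue = deque(body)        # loop instructions stored into a queue
    count = 0                  # loop iteration counter
    results = []
    while count < iterations:  # continue replaying until the last iteration
        for instr in queue:    # replay the instructions from the queue
            results.append(instr(count))
        count += 1
    return results
```

For instance, replaying a two-instruction body for three iterations: `run_loop([lambda i: i, lambda i: i * 2], 3)` returns `[0, 0, 1, 2, 2, 4]`.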