26th week of 2011 patent application highlights part 71 |
Patent application number | Title | Published |
20110161561 | VIRTUALIZATION OF CHIP ENABLES - Virtual chip enable techniques perform memory access operations on virtual chip enables rather than physical chip enables. Each virtual chip enable is a construct that includes attributes that correspond to a unique physical or logical memory device. | 2011-06-30 |
20110161562 | REGION-BASED MANAGEMENT METHOD OF NON-VOLATILE MEMORY - A region-based management method of a non-volatile memory is provided. In the region-based management method, the storage space of all chips in the non-volatile memory is divided into physical regions, physical block sets, and physical page sets, and a logical space is divided into virtual regions, virtual blocks, and virtual pages. In the non-volatile memory, each physical block set is the smallest unit of space allocation and garbage collection, and each physical page set is the smallest unit of data access. The region-based management method includes a three-level address translation architecture for converting logical block addresses into physical block addresses. | 2011-06-30 |
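The three-level address translation described in this entry can be sketched as a small table walk from logical address to (region, block set, page offset). The region and block sizes, the table layout, and all function names below are illustrative assumptions, not details taken from the application:

```python
# Hypothetical sketch of a three-level logical-to-physical translation,
# loosely modeled on the region / block-set / page-set hierarchy described
# above. Sizes and the table layout are invented for illustration.

BLOCKS_PER_REGION = 4      # virtual blocks per virtual region (assumed)
PAGES_PER_BLOCK = 8        # virtual pages per virtual block (assumed)

def split_logical_address(lba):
    """Decompose a logical page address into (region, block, page) indices."""
    region, rest = divmod(lba, BLOCKS_PER_REGION * PAGES_PER_BLOCK)
    block, page = divmod(rest, PAGES_PER_BLOCK)
    return region, block, page

def translate(lba, region_table):
    """Walk the three levels: region table -> block-set table -> page offset."""
    region, block, page = split_logical_address(lba)
    block_table = region_table[region]          # level 1: locate the region
    physical_block_set = block_table[block]     # level 2: locate the block set
    return physical_block_set, page             # level 3: page inside the set

# One region mapping two virtual blocks onto physical block sets 7 and 3.
region_table = {0: {0: 7, 1: 3}}
print(translate(9, region_table))   # logical page 9 -> (3, 1)
```

Because each level is a separate table, only the region tables that are actually in use need to be resident, which is the usual motivation for region-based schemes.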
20110161563 | BLOCK MANAGEMENT METHOD OF A NON-VOLATILE MEMORY - A block management method applicable to a non-volatile memory storage system is provided. The non-volatile memory storage system includes a plurality of chips. Each chip includes a plurality of physical blocks. The physical blocks form a plurality of physical block sets. Each logical block in a logical space corresponds to at most two physical block sets. In the block management method, when a logical block corresponds to two physical block sets filled with data and more data is to be written, a free physical block set is allocated for storing the data. Then, one of the two physical block sets corresponding to the logical block is selected according to a predetermined criterion. The valid data in the selected physical block set is copied into the free physical block set. Next, the selected physical block set is erased and collected to the pool of free physical block sets. | 2011-06-30 |
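The copy-then-erase step in this entry (copy valid data into a free block set, erase the old set, return it to the free pool) can be illustrated with a toy model. All data structures and names below are assumptions made for the sketch:

```python
# Toy sketch of the reclaim step described above: valid pages from a selected
# physical block set are copied into a free set, then the old set is erased
# and collected back into the pool of free sets. Structures are illustrative.

def reclaim(block_sets, free_pool, victim):
    """Copy valid data out of `victim`, erase it, return it to the free pool.
    `None` entries stand in for invalid (stale) pages."""
    fresh = free_pool.pop()                       # allocate a free block set
    block_sets[fresh] = [p for p in block_sets[victim] if p is not None]
    block_sets[victim] = []                       # erase the selected set
    free_pool.append(victim)                      # collect it as free again
    return fresh

block_sets = {0: ["a", None, "b"], 1: []}         # set 0 holds one stale page
free_pool = [1]
new_set = reclaim(block_sets, free_pool, victim=0)
```

After the call, the valid pages live in set 1 and set 0 has rejoined the free pool, mirroring the allocate/copy/erase/collect sequence in the abstract.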
20110161564 | BLOCK MANAGEMENT AND DATA WRITING METHOD, AND FLASH MEMORY STORAGE SYSTEM AND CONTROLLER USING THE SAME - A block management method for managing a plurality of physical blocks is provided. The method includes grouping the physical blocks into a plurality of physical units, grouping a portion of the physical units into a data area and a spare area, configuring a plurality of logical units, and grouping the logical units into a plurality of logical unit groups and configuring another portion of the physical units as a plurality of global random physical units corresponding to the logical unit groups, wherein each of the global random physical units corresponds to one of the logical unit groups. The method further includes getting the physical units from the spare area as global random substitute physical units of the global random physical units. Accordingly, the method can store data in the global random physical units or the global random substitute physical units, thereby reducing the time for executing a host write command. | 2011-06-30 |
20110161565 | FLASH MEMORY STORAGE SYSTEM AND CONTROLLER AND DATA WRITING METHOD THEREOF - A flash memory storage system having a flash memory controller and a flash memory chip is provided. The flash memory controller configures a second physical unit of the flash memory chip as a midway cache physical unit corresponding to a first physical unit and temporarily stores first data corresponding to a first host write command and second data corresponding to a second host write command in the midway cache physical unit, wherein the first and second data correspond to slow physical addresses of the first physical unit. Then, the flash memory controller synchronously copies the first and second data from the midway cache physical unit into the first physical unit, thereby shortening the time for writing data into the flash memory chip. | 2011-06-30 |
20110161566 | WRITE TIMEOUT CONTROL METHODS FOR FLASH MEMORY AND MEMORY DEVICES USING THE SAME - A write timeout control method for a flash memory having a plurality of spare blocks and data blocks including a plurality of mother blocks is disclosed. The method includes the steps of: receiving a write command and a starting logical block address; determining an update mode according to a target mother block linked to the starting logical block address; determining whether a pre-clean operation is performed on a first mother block; if so, performing a post-clean operation on the first mother block during a first time period; re-configuring the first mother block as a spare block; performing a programming process to write data on the target mother block; determining whether the number of mother blocks exceeds a first threshold; and if so, performing the pre-clean operation on a second mother block. The first and second mother blocks are configured as blocks to be cleaned. | 2011-06-30 |
20110161567 | MEMORY DEVICE FOR REDUCING PROGRAMMING TIME - A non-volatile memory device includes: first and second planes each comprising a plurality of non-volatile memory cells; first and second page buffers corresponding to the first and second planes, respectively; an input/output control unit configured to selectively control input/output paths of data stored in the first and second page buffers; a flash interface connected to the input/output control unit; and a host connected to the flash interface. | 2011-06-30 |
20110161568 | MULTILEVEL MEMORY BUS SYSTEM FOR SOLID-STATE MASS STORAGE - The present invention relates to a multilevel memory bus system for transferring information between at least one DMA controller and at least one solid-state semiconductor memory device, such as NAND flash memory devices or the like. This multilevel memory bus system includes at least one DMA controller coupled to an intermediate bus; a flash memory bus; and a flash buffer circuit between the intermediate bus and the flash memory bus. This multilevel memory bus system may be disposed to support: an n-bit wide bus width, such as nibble-wide or byte-wide bus widths; a selectable data sampling rate, such as a single or double sampling rate, on the intermediate bus; a configurable bus data rate, such as a single, double, quad, or octal data sampling rate; CRC protection; an exclusive busy mechanism; dedicated busy lines; or any combination of these. | 2011-06-30 |
20110161569 | MEMORY MODULE AND METHOD FOR EXCHANGING DATA IN MEMORY MODULE - The present application provides a memory module. The memory module includes one or more volatile memory devices, one or more non-volatile memory devices, and a data exchange controller. The data exchange controller controls data exchange between the volatile memory devices and the non-volatile memory devices. | 2011-06-30 |
20110161570 | NONVOLATILE SEMICONDUCTOR MEMORY DEVICES, DATA UPDATING METHODS THEREOF, AND NONVOLATILE SEMICONDUCTOR MEMORY SYSTEMS - Integrated circuit memory devices utilize techniques to improve the timing of data update operations within a non-volatile memory, by more efficiently combining memory cell programming operations with threshold voltage adjust operations on erased memory cells. These adjust operations operate to narrow a threshold voltage distribution between memory cells that remain in an erased state after the programming operation has been performed. An integrated circuit memory device may include at least a first block of non-volatile memory cells and a volatile memory device, which has a data storage capacity equivalent to at least a capacity of the at least a first block of non-volatile memory cells. A memory controller is also provided, which is electrically coupled to the at least a first block of non-volatile memory cells and the volatile memory device. The memory controller is configured to, among other things, control data update operations within a block of data stored within the first block of non-volatile memory cells. | 2011-06-30 |
20110161571 | FLASH MEMORY DEVICE AND METHOD OF PROGRAMMING FLASH MEMORY DEVICE - A flash memory device performs a program operation using an incremental step pulse programming (ISPP) scheme comprising a plurality of program loops. In each of the program loops, a program pulse operation is performed to increase the threshold voltages of selected memory cells, and a program verify operation is performed to verify a program status of the selected memory cells. The program verify operation can be selectively skipped in some program loops based on a voltage increment of one or more of the program pulse operations, an amount by which threshold voltages of the selected memory cells are to be increased in the ISPP scheme, or a total number of program loops of the ISPP scheme. | 2011-06-30 |
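The verify-skipping idea in this ISPP entry lends itself to a short simulation. The step size, target voltage, and the "skip the first N verifies" rule below are assumptions chosen for illustration, not parameters of the claimed device:

```python
# Illustrative sketch of incremental step pulse programming (ISPP) with
# selective verify skipping: verifies are skipped in early loops where the
# pulses cannot yet have raised the threshold to the target.

def ispp_program(v_start, v_step, v_target, skip_verifies=2):
    """Pulse the threshold voltage upward; return (program loops, verifies)."""
    v_cell = v_start
    loops = verifies = 0
    programmed = False
    while not programmed:
        v_cell += v_step                 # program pulse: raise threshold
        loops += 1
        if loops <= skip_verifies:       # verify selectively skipped
            continue
        verifies += 1                    # program verify operation
        programmed = v_cell >= v_target
    return loops, verifies

# Skipping two early verifies halves the verify count for this cell.
print(ispp_program(0.0, 0.5, 2.0))               # (4, 2)
```

The same four pulses are issued either way; only the redundant verifies in the first loops are elided, which is where the time saving comes from.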
20110161572 | Executing Applications From a Semiconductor Nonvolatile Memory - A processor-based device (e.g., a wireless device) may include a processor and a semiconductor nonvolatile memory to directly execute an application (e.g., an execute-in-place application) using an associated database. Within a flash memory, in one embodiment, an executable program may be separately stored in a non-fragmented manner from a resident database that includes program management information for use in an execution that does not involve a random access memory, saving time and resources. | 2011-06-30 |
20110161573 | DEVICE IDENTIFIERS FOR NONVOLATILE MEMORY MODULES - A memory card has a data scrambler that performs a data scrambling operation on data stored in the memory card according to a device ID associated with the memory card. The device ID is either set at the factory and permanently stored in the card, or configurable by a user or a host system. | 2011-06-30 |
20110161574 | SETTING CONTROL APPARATUS AND METHOD FOR OPERATING SETTING CONTROL APPARATUS - A setting control apparatus includes a setting control part, a special register, and a read-out control part. In response to an input of a control value used in a processing circuit, the setting control part stores the control value in a temporary storage part. The special register is electrically connected to the processing circuit and serves as a storage element capable of storing the control value. The read-out control part controls a read-out operation for reading the control value from the temporary storage part into the special register, and performs the read-out operation at a predetermined timing after storing of the control value in the temporary storage part is completed. | 2011-06-30 |
20110161575 | MICROCODE REFACTORING AND CACHING - Methods and apparatus relating to microcode refactoring and/or caching are described. In some embodiments, an off-chip structure that stores microcode is shared by multiple processor cores. Other embodiments are also described and claimed. | 2011-06-30 |
20110161576 | MEMORY MODULE AND MEMORY SYSTEM COMPRISING MEMORY MODULE - A memory module comprises a plurality of semiconductor memory devices each having a termination circuit for a command/address bus. The semiconductor memory devices are formed in a substrate of the memory module, and they operate in response to a command/address signal, a data signal, and a termination resistance control signal. | 2011-06-30 |
20110161577 | DATA STORAGE SYSTEM, ELECTRONIC SYSTEM, AND TELECOMMUNICATIONS SYSTEM - A data storage system comprising a plurality of buffers configured to store data, a read pointer to indicate a particular one of the plurality of buffers from which data should be read, and a write pointer to indicate a particular one of the plurality of buffers to which data should be written is disclosed. The write pointer points at least one buffer ahead of the buffer to which the read pointer is pointing. An electronic system and a telecommunication system are further disclosed. | 2011-06-30 |
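The pointer scheme in this entry (a write pointer that stays at least one buffer ahead of the read pointer) is essentially a circular buffer pool. The class and method names below are invented for illustration:

```python
# Minimal sketch of the read/write pointer scheme described above: a pool of
# buffers where the write pointer always leads the read pointer by at least
# one slot, so the slot being written is never the slot being read.

class BufferPool:
    def __init__(self, n=4):
        self.slots = [None] * n
        self.read_ptr = 0
        self.write_ptr = 1                       # starts one slot ahead

    def lead(self):
        """Distance the write pointer is ahead of the read pointer."""
        return (self.write_ptr - self.read_ptr) % len(self.slots)

    def write(self, value):
        self.slots[self.write_ptr] = value       # write at the write pointer
        self.write_ptr = (self.write_ptr + 1) % len(self.slots)

    def read(self):
        value = self.slots[self.read_ptr]        # read at the read pointer
        self.read_ptr = (self.read_ptr + 1) % len(self.slots)
        assert self.lead() >= 1, "write pointer must stay ahead of read"
        return value

pool = BufferPool(4)
pool.write("a")
pool.write("b")
```

Because writes land one slot ahead of the read position, reader and writer never touch the same buffer at the same time, which is the property the abstract claims.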
20110161578 | SEMICONDUCTOR MEMORY DEVICE PERFORMING PARTIAL SELF REFRESH AND MEMORY SYSTEM INCLUDING SAME - A semiconductor memory device capable of performing a partial self refresh and semiconductor memory system including same is provided. The semiconductor memory device includes: a memory circuit including a memory array; a skip address storage unit storing an address of an excluded region not requiring refresh in the memory array as a skip address; a refresh address generator providing an address of a region of the memory array requiring refresh as a refresh address; and an address comparator receiving and comparing the skip address and refresh address, and providing a refresh control signal to the memory circuit based on the comparison. | 2011-06-30 |
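The skip-address mechanism in this entry can be modeled as a filter on the refresh address sequence. The row count and function name below are illustrative assumptions:

```python
# Sketch of partial self refresh: a refresh address generator walks the rows,
# and an address comparator suppresses the refresh control signal for rows
# recorded in the skip-address store (regions not requiring refresh).

def partial_self_refresh(n_rows, skip_addresses):
    """Return only the row addresses that actually get refreshed."""
    refreshed = []
    for row in range(n_rows):            # refresh address generator
        if row in skip_addresses:        # comparator matched a skip address
            continue
        refreshed.append(row)            # refresh control signal asserted
    return refreshed

# Rows 2 and 5 hold no live data, so their refresh cycles are saved.
print(partial_self_refresh(8, skip_addresses={2, 5}))
```

Every suppressed row is a refresh cycle (and its power) saved, which is the point of excluding regions that hold no data needing retention.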
20110161579 | Method and System for Minimizing Impact of Refresh Operations on Volatile Memory Performance - A memory system is provided. The system includes a volatile memory, a refresh counter configured to monitor a number of advanced refreshes performed in the volatile memory, and a controller configured to check the refresh counter to determine whether a regularly scheduled refresh can be skipped in response to detecting a request for the regularly scheduled refresh. | 2011-06-30 |
20110161580 | PROVIDING DYNAMIC DATABASES FOR A TCAM - A network device allocates a particular number of memory blocks in a ternary content-addressable memory (TCAM) of the network device to each database of multiple databases, and creates a list of additional memory blocks in an external TCAM of the network device. The network device also receives, by the external TCAM, a request for an additional memory block to provide one or more rules from one of the multiple databases, and allocates, by the external TCAM and to the requesting database, an additional memory block from the list of additional memory blocks. | 2011-06-30 |
20110161581 | SEMICONDUCTOR CIRCUIT APPARATUS - A semiconductor circuit apparatus having a commonly shared control unit that coordinates reading and writing timed activities in two ranked subcircuits is presented. The semiconductor circuit includes: first and second ranks; and a rank control block shared by the first and second ranks and configured to provide a column-related command and an address to one of the first and second ranks in response to a chip select signal for selecting the first or second rank. | 2011-06-30 |
20110161582 | Advanced Disk Drive Power Management Based on Maximum System Throughput - The disclosed technology identifies bottlenecks in a hierarchical storage subsystem and, based upon the rate at which data may be transmitted through a particular bottleneck, determines the smallest number of disk drives required to match that transmission rate. If the required number of disks is less than the total number of disks, only that subset is maintained in an active state, with the remainder placed in either a “standby” or “off” mode. In this manner, overall system power consumption is reduced. In one embodiment, the disclosed techniques are implemented by active disk management at a high level of the storage infrastructure. | 2011-06-30 |
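The sizing rule this entry describes reduces to simple arithmetic: the smallest number of active disks is the ceiling of the bottleneck rate over the per-disk rate. The throughput figures below are invented for illustration:

```python
# Back-of-the-envelope sketch of the disk-count rule described above: match
# the bottleneck's transmission rate with as few active disks as possible,
# and mark the rest as candidates for standby/off mode.

import math

def active_disks_needed(bottleneck_mb_s, per_disk_mb_s, total_disks):
    """Return (active disks, disks that can be put in standby/off)."""
    needed = math.ceil(bottleneck_mb_s / per_disk_mb_s)
    active = min(needed, total_disks)        # cannot exceed what's installed
    standby = total_disks - active
    return active, standby

# A 1000 MB/s bottleneck fed by 120 MB/s disks, with 12 disks installed:
print(active_disks_needed(1000, 120, 12))    # (9, 3)
```

Here nine disks saturate the bottleneck, so spinning the other three does not raise system throughput and only costs power.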
20110161583 | MEMORY CARD AND MEMORY SYSTEM INCLUDING SEMICONDUCTOR CHIPS IN STACKED STRUCTURE - A memory card and memory system are disclosed. The memory card includes a plurality of ports formed on an external surface of the memory card, a memory controller coupled to the plurality of ports and configured to communicate with an external host through the ports, and to generate a plurality of internal signals for controlling a memory operation based on signals received from the external host, and a memory device coupled to the memory controller and comprising at least two semiconductor chips, which are vertically stacked on each other. Each semiconductor chip comprises a plurality of through substrate vias for receiving the plurality of internal signals from the memory controller. The memory controller generates first and second internal signals based on a first signal received through a first port, and the first and second internal signals are provided to the memory device respectively through first and second signal paths that are electrically isolated from each other. | 2011-06-30 |
20110161584 | SYSTEM AND METHOD FOR INQUIRY CACHING IN A STORAGE AREA NETWORK - A system and method for servicing an inquiry command from a host device requesting inquiry data about a sequential device on a storage area network. The inquiry data may be cached by a circuitry coupled to the host device and the sequential device. The circuitry may reside in a router. In some embodiments, depending upon whether the sequential device is available to process the inquiry command, the circuitry may forward the inquiry command to the sequential device or process the inquiry command itself, utilizing a cached version of the inquiry data. The cached version may include information indicating that the sequential device is not available. In some embodiments, regardless of whether the sequential device is available, the circuitry may process the inquiry command and return the inquiry data from a cache memory. | 2011-06-30 |
20110161585 | PROCESSING NON-OWNERSHIP LOAD REQUESTS HITTING MODIFIED LINE IN CACHE OF A DIFFERENT PROCESSOR - Methods and apparatus to efficiently process non-ownership load requests hitting modified line (M-line) in cache of a different processor are described. In one embodiment, a first agent changes the state of a first data and forwards it to a second, requesting agent who stores the first data in an alternative modified state. Other embodiments are also described. | 2011-06-30 |
20110161586 | Shared Memories for Energy Efficient Multi-Core Processors - Technologies are described herein related to multi-core processors that are adapted to share processor resources. An example multi-core processor can include a plurality of processor cores. The multi-core processor further can include a shared register file selectively coupled to two or more of the plurality of processor cores, where the shared register file is adapted to serve as a shared resource among the selected processor cores. | 2011-06-30 |
20110161587 | PROACTIVE PREFETCH THROTTLING - According to a method of data processing, a memory controller receives a plurality of data prefetch requests from multiple processor cores in the data processing system, where the plurality of data prefetch requests includes a data prefetch request issued by a particular processor core among the multiple processor cores. In response to receipt of the data prefetch request, the memory controller provides a coherency response indicating an excess number of data prefetch requests. In response to the coherency response, the particular processor core reduces a rate of issuance of data prefetch requests. | 2011-06-30 |
20110161588 | FORMATION OF AN EXCLUSIVE OWNERSHIP COHERENCE STATE IN A LOWER LEVEL CACHE - In response to a memory access request of a processor core that targets a target cache line, the lower level cache of a vertical cache hierarchy associated with the processor core supplies a copy of the target cache line to an upper level cache in the vertical cache hierarchy and retains a copy in a shared coherence state. The upper level cache holds the copy of the target cache line in a private shared ownership coherence state indicating that each cached copy of the target memory block is cached within the vertical cache hierarchy associated with the processor core. In response to the upper level cache signaling replacement of the copy of the target cache line in the private shared ownership coherence state, the lower level cache updates its copy of the target cache line to the exclusive ownership coherence state without coherency messaging with other vertical cache hierarchies. | 2011-06-30 |
20110161589 | SELECTIVE CACHE-TO-CACHE LATERAL CASTOUTS - A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line. | 2011-06-30 |
20110161590 | SYNCHRONIZING ACCESS TO DATA IN SHARED MEMORY VIA UPPER LEVEL CACHE QUEUING - A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state, and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state. | 2011-06-30 |
20110161591 | INCREASED NAND FLASH MEMORY READ THROUGHPUT - A method of reading sequential pages of flash memory from alternating memory blocks comprises loading data from a first page into a first primary data cache and a second page into a second primary data cache simultaneously, the first and second pages loaded from different blocks of flash memory. Data from the first primary data cache is stored in a first secondary data cache, and data from the second primary data cache is stored in a second secondary data cache. Data is sequentially provided from the first and second secondary data caches by a multiplexer coupled to the first and second secondary data caches. | 2011-06-30 |
20110161592 | Dynamic system reconfiguration - In some embodiments system reconfiguration code and data to be used to perform a dynamic hardware reconfiguration of a system including a plurality of processor cores is cached and any direct or indirect memory accesses during the dynamic hardware reconfiguration are prevented. One of the processor cores executes the cached system reconfiguration code and data in order to dynamically reconfigure the hardware. Other embodiments are described and claimed. | 2011-06-30 |
20110161593 | Cache unit, arithmetic processing unit, and information processing unit - A cache unit comprising a register file that selects an entry indicated by a cache index of n bits (n is a natural number) that is used to search for an instruction cache tag, using multiplexer groups having n stages respectively corresponding to the n bits of the cache index. Among the multiplexer groups having n stages, a multiplexer group in an m | 2011-06-30 |
20110161594 | INFORMATION PROCESSING DEVICE AND CACHE MEMORY CONTROL DEVICE - An information processor includes processing units that each process out-of-order memory accesses. Each processing unit includes a cache memory; an instruction port that holds instructions for accessing data in the cache memory; a first determinator that validates a first flag when a request for invalidating cache data is received after the target data of a load instruction is transferred from the cache memory, and a load instruction having a cache index identical to that of the target address of the received invalidating instruction exists; a second determinator that validates a second flag when the target data of the load instruction in the instruction port is transferred after a cache miss of the target data occurred; and a re-execution determinator that instructs re-execution of an instruction that follows the load instruction if the first and second flags are valid when a load instruction in the instruction port has been completed. | 2011-06-30 |
20110161595 | CACHE MEMORY POWER REDUCTION TECHNIQUES - Methods and apparatus to provide for power consumption reduction in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way. The virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access or comparison. In another embodiment, a speculative way hit may be used based on pre-ECC partial tag match to wake up a subset of data arrays. Other embodiments are also described. | 2011-06-30 |
20110161596 | DIRECTORY-BASED COHERENCE CACHING - Techniques are generally described for methods, systems, data processing devices and computer readable media related to multi-core parallel processing directory-based cache coherence. Example systems may include one multi-core processor or multiple multi-core processors. An example multi-core processor includes a plurality of processor cores, each of the processor cores having a respective cache. The system may further include a main memory coupled to each multi-core processor. A directory descriptor cache may be associated with the plurality of the processor cores, where the directory descriptor cache may be configured to store a plurality of directory descriptors. Each of the directory descriptors may provide an indication of the cache sharing status of a respective cache-line-sized row of the main memory. | 2011-06-30 |
20110161597 | Combined Memory Including a Logical Partition in a Storage Memory Accessed Through an IO Controller - A computer system having a combined memory. A first logical partition of the combined memory is a main memory region in a storage memory. A second logical partition of the combined memory is a direct memory region in a main memory. A memory controller comprising a storage controller is configured to receive a memory access request including a real address from a processor, determine whether the real address is for the first logical partition or for the second logical partition. If the address is for the first logical partition the storage controller communicates with an IO controller in the storage memory to service the memory access request. If the address is for the direct memory region, the memory controller services the memory access request in a conventional manner. | 2011-06-30 |
20110161598 | DUAL TIMEOUT CACHING - Embodiments of the present invention provide a method, system and computer program product for dual timer fragment caching. In an embodiment of the invention, a dual timer fragment caching method can include establishing both a soft timeout and also a hard timeout for each fragment in a fragment cache. The method further can include managing the fragment cache by evicting fragments in the fragment cache subsequent to a lapsing of a corresponding hard timeout. The management of the fragment cache also can include responding to multiple requests by multiple requestors for a stale fragment in the fragment cache with a lapsed corresponding soft timeout by returning the stale fragment from the fragment cache to some of the requestors, by retrieving and returning a new form of the stale fragment to others of the requestors, and by replacing the stale fragment in the fragment cache with the new form of the stale fragment with a reset soft timeout and hard timeout. | 2011-06-30 |
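The soft/hard timeout policy in this entry maps to a small cache class: past the hard timeout a fragment is evicted; between the soft and hard timeouts it is stale but still servable while a replacement is fetched. The class name, time handling, and return convention below are illustrative assumptions:

```python
# Sketch of dual-timeout fragment caching: each entry carries a soft TTL
# (after which it is stale but still usable) and a hard TTL (after which it
# is evicted). Time is passed in explicitly to keep the sketch deterministic.

class DualTimeoutCache:
    def __init__(self, soft_ttl, hard_ttl):
        self.soft_ttl, self.hard_ttl = soft_ttl, hard_ttl
        self.entries = {}                     # key -> (fragment, stored_at)

    def put(self, key, fragment, now):
        self.entries[key] = (fragment, now)

    def get(self, key, now):
        """Return (fragment, 'fresh'|'stale') or (None, None) if evicted."""
        if key not in self.entries:
            return None, None
        fragment, stored_at = self.entries[key]
        age = now - stored_at
        if age >= self.hard_ttl:              # hard timeout lapsed: evict
            del self.entries[key]
            return None, None
        if age >= self.soft_ttl:              # soft timeout lapsed: serve
            return fragment, "stale"          # stale while a refresh runs
        return fragment, "fresh"

cache = DualTimeoutCache(soft_ttl=10, hard_ttl=60)
cache.put("header", "<div>cached fragment</div>", now=0)
```

In the abstract's scheme, a "stale" hit lets some requestors be answered immediately from the cache while one request fetches the new fragment and reinstates it with reset timeouts.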
20110161599 | Handling of a wait for event operation within a data processing apparatus - A data processing apparatus and method are provided for handling of a wait for event operation. The data processing apparatus forms a portion of a coherent cache system and has a master device for performing data processing operations, including a wait for event operation causing the master device to enter a power saving mode. A cache is coupled to the master device and arranged to store data values for access by the master device when performing the data processing operations. Cache coherency circuitry is responsive to a coherency request from another portion of the coherent cache system, to detect whether a data value identified by the coherency request is present in the cache, and if so to cause a coherency action to be taken in respect of that data value stored in the cache. Wake event circuitry is responsive to the cache coherency circuitry to issue a wake event to the master device if the coherency action is taken. The master device is then responsive to the wake event to exit the power saving mode. Such a mechanism provides a simple and effective technique for causing the master device to exit the power saving mode, which can be used in all hardware implementations of coherent cache systems irrespective of the type of master devices provided within the coherent cache system. | 2011-06-30 |
20110161600 | Arithmetic processing unit, information processing device, and cache memory control method - A processor holds, in a plurality of respective cache lines, part of data held in a main memory unit. The processor also holds, in the plurality of respective cache lines, a tag address used to search for the data held in the cache lines and a flag indicating the validity of the data held in the cache lines. The processor executes a cache line fill instruction on a cache line corresponding to a specified address. Upon execution of the cache line fill instruction, the processor registers predetermined data in the cache line of the cache memory unit which has a tag address corresponding to the specified address and validates a flag in the cache line having the tag address corresponding to the specified address. | 2011-06-30 |
20110161601 | INTER-QUEUE ANTI-STARVATION MECHANISM WITH DYNAMIC DEADLOCK AVOIDANCE IN A RETRY BASED PIPELINE - Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed. | 2011-06-30 |
20110161602 | LOCK-FREE CONCURRENT OBJECT DICTIONARY - An object storage system comprises one or more computer processors or threads that can concurrently access a shared memory, the shared memory comprising an array of equally-sized cells. In one embodiment, each cell is of the size used by the processors to represent a pointer, e.g., 64 bits. Using an algorithm performing only one memory write, and using a hardware-provided transactional operation, such as a compare-and-swap instruction, to implement the memory write, concurrent access is safely accommodated in a lock-free manner. | 2011-06-30 |
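The single-write insertion this entry describes can be sketched with open addressing over an array of cells. Real implementations rely on a hardware compare-and-swap on pointer-sized cells; the plain function below stands in for that instruction, so this illustrates only the control flow, not actual lock-freedom:

```python
# Pseudo-CAS sketch of lock-free insertion into an array of equally-sized
# cells: probe for an empty cell and claim it with one compare-and-swap,
# i.e. one memory write. compare_and_swap() is a stand-in for the hardware
# instruction and is NOT atomic in Python.

EMPTY = None

def compare_and_swap(cells, index, expected, new):
    """Stand-in for hardware CAS: install `new` only if the cell still holds
    `expected`; return True on success."""
    if cells[index] is expected:
        cells[index] = new
        return True
    return False

def insert(cells, key, value):
    """Open-addressed insert using a single successful write (the CAS)."""
    n = len(cells)
    for probe in range(n):
        i = (hash(key) + probe) % n
        if compare_and_swap(cells, i, EMPTY, (key, value)):
            return i                      # claimed the cell in one write
        if cells[i][0] == key:
            return i                      # another writer got there first
    raise RuntimeError("dictionary full")

cells = [EMPTY] * 8
slot = insert(cells, "x", 1)
```

If the CAS fails because a concurrent writer claimed the cell, the loser either discovers its own key already present or probes onward, so no lock is ever taken.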
20110161603 | MEMORY TRANSACTION GROUPING - Various technologies and techniques are described for providing a transaction grouping feature for use in programs operating under a transactional memory system. The transaction grouping feature is operable to allow transaction groups to be created that contain related transactions. The transaction groups are used to enhance performance and/or operation of the programs. Different locking and versioning mechanisms can be used with different transaction groups. When running transactions, a hardware transactional memory execution mechanism can be used for one transaction group while a software transactional memory execution mechanism used for another transaction group. | 2011-06-30 |
20110161604 | WRITER/READER/NO-ACCESS DOMAIN DATA ACCESSIBILITY - Multiple types of executable agents operate within a domain. The domain includes mutable shared state and immutable shared state, with agents internal to the domain only operating on the shared state. Writer agents are defined to be agents that have read access and write access to mutable shared state and read access only to immutable shared state. General reader agents have read access to both mutable shared state and immutable shared state and have no write access. Immutable reader agents have read access to only immutable shared state and have no write access. By appropriate scheduling of the different types of agents, data races may be reduced or eliminated. | 2011-06-30 |
20110161605 | Memory devices and methods of operating the same - A memory device includes a memory cell. The memory cell includes: a bipolar memory element and a bidirectional switching element. The bidirectional switching element is connected to ends of the bipolar memory element, and has a bidirectional switching characteristic. The bidirectional switching element includes: a first switching element and a second switching element. The first switching element is connected to a first end of the bipolar memory element and has a first switching direction. The second switching element is connected to a second end of the bipolar memory element and has a second switching direction. The second switching direction is opposite to the first switching direction. | 2011-06-30 |
20110161606 | SEMICONDUCTOR MEMORY DEVICE AND METHOD OF TESTING THE SAME - According to one embodiment, a nonvolatile semiconductor memory device is disclosed. The semiconductor memory device can include a first memory cell array and a second memory cell array acting in parallel with each other, the first memory cell array including a plurality of first blocks and the second memory cell array including a plurality of second blocks, each of the blocks being an erase unit; a plurality of flag registers configured to correspond to each of the first blocks and each of the second blocks, flag data being writable to the flag registers by selecting a block address; a control circuit reading out, in parallel, the flag data in the flag register corresponding to the first block and the flag data in the flag register corresponding to the second block; a first counter register storing a count of the flag data in the flag registers corresponding to the first blocks of the first memory cell array; and a second counter register storing a count of the flag data in the flag registers corresponding to the second blocks of the second memory cell array. | 2011-06-30 |
20110161607 | STORAGE SYSTEM AND CONTROL METHOD THEREFOR - There is provided a storage technique for efficiently resolving the concentration of accesses to a particular storage medium. In the present invention, in a storage system having multiple physically separated storage media, duplicates of the multiple blocks forming particular contents are generated, and the duplicated blocks are stored uniformly (in stripes) across storage media other than the storage medium that stores the particular contents. | 2011-06-30 |
20110161608 | METHOD TO CUSTOMIZE FUNCTION BEHAVIOR BASED ON CACHE AND SCHEDULING PARAMETERS OF A MEMORY ARGUMENT - Disclosed are a method, a system and a computer program product of operating a data processing system that can include or be coupled to multiple processor cores. In one or more embodiments, each of multiple memory objects can be populated with work items and can be associated with attributes that can include information which can be used to describe data of each memory object and/or which can be used to process data of each memory object. The attributes can be used to indicate one or more of a cache policy, a cache size, and a cache line size, among others. In one or more embodiments, the attributes can be used as a history of how each memory object is used. The attributes can be used to indicate cache history statistics (e.g., a hit rate, a miss rate, etc.). | 2011-06-30 |
20110161609 | INFORMATION PROCESSING APPARATUS AND ITS CONTROL METHOD - Proposed are an information processing apparatus and its control method capable of reliably acquiring operation results from the same point in time across a plurality of storage apparatuses. With this information processing apparatus connected to a plurality of storage apparatuses, the time difference between the internal time of each storage apparatus and the internal time of the information processing apparatus is detected. An arbitrary future time, adjusted by each storage apparatus's detected time difference, is set in the plurality of storage apparatuses as the execution time of a predetermined operation, and the execution result of the predetermined operation is collected from each of the plurality of storage apparatuses after that time has elapsed. | 2011-06-30 |
20110161610 | COMPILER-ENFORCED AGENT ACCESS RESTRICTION - A compiler that enforces, at compile time, domain data access permissions and/or agent data access permissions on at least one agent to be created within a domain. The compiler identifies domain data of a domain to be created, and an agent to be created within the domain at runtime. The domain access permissions of the agent are also identified. As part of compilation of an expression of an agent, a reference to the domain data is identified. Then, the compiler evaluates an operation that the reference to the domain data would impose on the domain data upon evaluating the expression at runtime. The compiler then determines whether or not the operation is in violation of the domain access permissions of the agent with respect to the identified domain data. Agent data access may also be evaluated depending on whether the access occurs by a function or a method. | 2011-06-30 |
20110161611 | METHOD FOR CONTROLLING SEMICONDUCTOR STORAGE SYSTEM CONFIGURED TO MANAGE DUAL MEMORY AREA - A method for controlling a semiconductor storage system configured to manage dual memory areas for protecting the system against abrupt and abnormal power disruptions is presented. The semiconductor storage system has a first physical area and a second physical area, in which first data having a first logical block address is stored in the first physical area. The method includes providing a write command so that the first data is updated to second data. The method also includes writing the second data in a second physical area in response to the write command. When writing the second data in the second physical area, a corresponding invalid logical address is allocated to the second physical area. | 2011-06-30 |
20110161612 | STORAGE APPARATUS MOUNTING FRAME, STORAGE EXTENSION APPARATUS, AND METHOD OF CONTROLLING STORAGE APPARATUS - The frame has a storage apparatus attachment portion that secures a storage apparatus; a data read prevention processing unit that makes at least a part of the data stored in the storage apparatus unreadable; and an input device that inputs a read prevention instruction for the storage apparatus. Because the data read prevention processing unit makes the stored data unreadable in response to a read prevention instruction received from the input device, data on the storage apparatus is reliably and easily rendered unreadable, and data leakage from a typical storage apparatus is prevented at lower cost. | 2011-06-30 |
20110161613 | MEMORY DEVICE, ELECTRONIC SYSTEM, AND METHODS ASSOCIATED WITH MODIFYING DATA AND A FILE OF A MEMORY DEVICE - A memory device, system and method of editing a file in a non-volatile memory device is described. The memory device includes a controller and a memory array configured to copy an existing first file into a second file during editing and to maintain the first file while applying edits to the second file. When editing is completed, a first cluster pointer of the first file is redirected to point at the first cluster of the second file that has been edited. | 2011-06-30 |
20110161614 | Pre-leak detection scan to identify non-pointer data to be excluded from a leak detection scan - A computer-implemented method of detecting memory that may be reclaimed from application data objects that are no longer in use. When at least a first virtual memory region is newly committed for heap block storage, a pre-leak detection scan of other virtual memory regions can be performed to identify at least one non-pointer data item in the other virtual memory regions, the non-pointer data item comprising data that corresponds to an address of a memory location within the first virtual memory region, but that is not a memory pointer. A leak detection scan can be performed to identify potential memory pointers, wherein the identified non-pointer data item is excluded from the identified potential memory pointers. A list of leaked heap blocks can be output. Each leaked heap block can exclusively comprise memory locations that do not have a corresponding potential memory pointer. | 2011-06-30 |
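The two-pass idea above (exclude known non-pointer data first, then scan for leaks) can be modeled in a few lines. The flat word-list memory model, the `pre_leak_scan`/`leak_scan` names, and the is-known-non-pointer flag are all illustrative assumptions used to show the dataflow, not the patented implementation.

```python
# Toy model of the pre-leak detection scan: before scanning for leaks,
# identify word values in other regions that merely look like addresses
# into the newly committed region (non-pointer data) and exclude them
# from the set of potential memory pointers.

NEW_REGION = range(0x1000, 0x2000)      # newly committed heap region
heap_blocks = {0x1000: 64, 0x1800: 32}  # block start address -> size

# Words found elsewhere in memory: (value, is_known_non_pointer)
other_regions = [
    (0x1000, False),  # a real pointer to the first block
    (0x1800, True),   # an integer that happens to equal a block address
]

def pre_leak_scan(words):
    """Collect values that fall in the new region but are not pointers."""
    return {v for v, non_ptr in words if v in NEW_REGION and non_ptr}

def leak_scan(words, excluded):
    """Return heap blocks with no potential pointer referencing them."""
    pointers = {v for v, _ in words if v in NEW_REGION and v not in excluded}
    return [blk for blk in heap_blocks if blk not in pointers]

excluded = pre_leak_scan(other_regions)
leaked = leak_scan(other_regions, excluded)
```

Without the pre-scan, the integer `0x1800` would count as a potential pointer and the second block would wrongly escape the leaked-blocks list.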
20110161615 | MEMORY MANAGEMENT DEVICE, MEMORY MANAGEMENT METHOD, AND MEMORY MANAGEMENT PROGRAM - One or more embodiments provide a technique of improving conventional thread-local garbage collection (GC) so as to avoid fragmentation. A memory management device having a plurality of processors implementing transactional memory includes a write barrier processing unit which, when performing a write barrier in response to initiation of a pointer write operation, registers in a write log an object that is located outside of a local area and that has a pointer to an object located in the local area, so as to set it as a target of conflict detection, and a garbage collector which, provided that no conflict is detected, copies any live shared object in the local area to the outside of the local area and collects any unwanted object irrespective of whether it is shared or not. | 2011-06-30 |
20110161616 | ON DEMAND REGISTER ALLOCATION AND DEALLOCATION FOR A MULTITHREADED PROCESSOR - A system for allocating and de-allocating registers of a processor. The system includes a register file having a plurality of physical registers and a first table coupled to the register file for mapping virtual register IDs to physical register IDs. A second table is coupled to the register file for determining whether a virtual register ID has a physical register mapped to it in a cycle. The first table and the second table enable physical registers of the register file to be allocated and de-allocated on a cycle-by-cycle basis to support execution of instructions by the processor. | 2011-06-30 |
20110161617 | SCALABLE PERFORMANCE-BASED VOLUME ALLOCATION IN LARGE STORAGE CONTROLLER COLLECTIONS - A scalable, performance-based, volume allocation technique that can be applied in large storage controller collections is disclosed. A global resource tree of multiple nodes representing interconnected components of a storage system in a plurality of component layers is analyzed to yield gap values for each node (e.g., a bottom-up estimation). The gap value for each node is an estimate of the amount in GB of the new workload that can be allocated in the subtree of that node without exceeding the performance and space bounds at any of the nodes in that subtree. The gap values of the global resource tree are further analyzed to generate an ordered allocation list of the volumes of the storage system (e.g., a top-down selection). The volumes may be applied to a storage workload in the order of the allocation list and the gap values and list are updated. | 2011-06-30 |
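The bottom-up gap estimation above can be sketched on a small tree. The `Node` class and the specific rule used here (a node's gap is the smaller of its own headroom and the sum of its children's gaps) are one illustrative reading of the abstract's "without exceeding the performance and space bounds at any of the nodes in that subtree", not the patented algorithm itself.

```python
# Sketch of bottom-up gap estimation on a resource tree. Each node's gap
# (in GB) is capped by its own remaining performance/space headroom and
# by the total its subtree can absorb.

class Node:
    def __init__(self, headroom_gb, children=()):
        self.headroom_gb = headroom_gb  # this node's own bound, in GB
        self.children = list(children)
        self.gap = None

def compute_gaps(node):
    """Bottom-up: a leaf's gap is its headroom; an inner node's gap is
    the smaller of its own headroom and what its children can take."""
    if not node.children:
        node.gap = node.headroom_gb
    else:
        node.gap = min(node.headroom_gb,
                       sum(compute_gaps(c) for c in node.children))
    return node.gap

# Two volumes under a controller: the children can absorb 30 GB total,
# but the controller itself is bounded at 25 GB.
root = Node(25, [Node(10), Node(20)])
```

A top-down selection pass would then walk this annotated tree, preferring subtrees with the largest gaps, to produce the ordered volume allocation list.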
20110161618 | ASSIGNING EFFICIENTLY REFERENCED GLOBALLY UNIQUE IDENTIFIERS IN A MULTI-CORE ENVIRONMENT - A mechanism is provided in a multi-core environment for assigning a globally unique core identifier. A Power PC® processor unit (PPU) determines an index alias corresponding to a natural index to a location in local storage (LS) memory. A synergistic processor unit (SPU) corresponding to the PPU translates the natural index to a first address in a core's memory, as well as translates the index alias to a second address in the core's memory. Responsive to the second address exceeding a physical memory size, the load store unit of the SPU truncates the second address to a usable range of address space in systems that do not map an address space. The second address and the first address point to the same physical location in the core's memory. In addition, the aliasing using index aliases also preserves the ability to combine persistent indices with relative indices without creating holes in a relative index map. | 2011-06-30 |
20110161619 | SYSTEMS AND METHODS IMPLEMENTING NON-SHARED PAGE TABLES FOR SHARING MEMORY RESOURCES MANAGED BY A MAIN OPERATING SYSTEM WITH ACCELERATOR DEVICES - Systems and methods are provided that utilize non-shared page tables to allow an accelerator device to share physical memory of a computer system that is managed by and operates under control of an operating system. The computer system can include a multi-core central processor unit. The accelerator device can be, for example, an isolated core processor device of the multi-core central processor unit that is sequestered for use independently of the operating system, or an external device that is communicatively coupled to the computer system. | 2011-06-30 |
20110161620 | SYSTEMS AND METHODS IMPLEMENTING SHARED PAGE TABLES FOR SHARING MEMORY RESOURCES MANAGED BY A MAIN OPERATING SYSTEM WITH ACCELERATOR DEVICES - Systems and methods are provided that utilize shared page tables to allow an accelerator device to share physical memory of a computer system that is managed by and operates under control of an operating system. The computer system can include a multi-core central processor unit. The accelerator device can be, for example, an isolated core processor device of the multi-core central processor unit that is sequestered for use independently of the operating system, or an external device that is communicatively coupled to the computer system. | 2011-06-30 |
20110161621 | MICRO-UPDATE ARCHITECTURE FOR ADDRESS TABLES - Methods of maintaining an address table for mapping logical addresses to physical addresses include continuously consolidating main address maps and an update address map, and periodically compacting the update address map. Consolidating includes selecting a main address map, reading valid mapping entries from the main and update address maps, constructing a mapping set including the valid mapping entries, and writing the mapping set to a second main address map. The update address map is compacted if a criterion is met, and includes copying the valid mapping entries to an unwritten block or metablock and assigning the unwritten block or metablock as a new update address map. The length of consolidation may depend on the average length of compacted mapping entries following a compaction operation. Increased performance due to lower maintenance overhead may result by using these methods. | 2011-06-30 |
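The compaction step above (copy only valid mapping entries to a fresh block, which becomes the new update map) can be sketched as a log replay. The list-of-`(lpn, ppn)` log format and the `compact`/`resolve` helpers are assumptions made for illustration; the patent's actual block/metablock layout is not modeled.

```python
# Illustrative sketch of update-map compaction: only the latest (valid)
# mapping entry per logical page survives; stale entries are dropped when
# the map is rewritten to an unwritten block.

def compact(update_map):
    """Keep only the latest entry per logical page number (lpn)."""
    latest = {}
    for lpn, ppn in update_map:       # replay the log in write order
        latest[lpn] = ppn             # later writes supersede earlier ones
    return sorted(latest.items())     # new, compacted update map

def resolve(lpn, main_map, update_map):
    """An update-map entry, if present, overrides the main map."""
    for entry_lpn, ppn in update_map:
        if entry_lpn == lpn:
            return ppn
    return main_map.get(lpn)

log = [(5, 100), (7, 101), (5, 102)]  # page 5 rewritten -> first entry stale
new_map = compact(log)
```

Keeping the update map small in this way is what bounds the cost of the continuous consolidation into the main address maps.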
20110161622 | MEMORY ACCESS CONTROL DEVICE, INTEGRATED CIRCUIT, MEMORY ACCESS CONTROL METHOD, AND DATA PROCESSING DEVICE - A memory access control unit is provided with a storage unit for storing a page table that stores a correspondence between a piece of data, a virtual page number, and a physical page number for all pages, and a conversion unit that includes a buffer for storing, for each of a subset of the pages, the virtual page number and the physical page number in correspondence, and a conversion processing unit operable to convert a virtual address into a physical address in accordance with content stored in the buffer. When the virtual page number of the virtual address included in the access request does not exist in the buffer, the conversion processing unit overwrites, in the buffer, (i) the virtual page number and the physical page number of a page for which a completed conversion count, indicating a number of times the virtual address of the page has been converted to the physical address, has reached a planned conversion count, with (ii) the virtual page number of the virtual address included in the access request and the physical page number corresponding, in the storage unit, to the virtual page number. | 2011-06-30 |
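The replacement policy above (overwrite a buffered entry whose completed conversion count has reached its planned count) can be modeled with a tiny TLB. The `TinyTLB` structure and the fixed planned count of 2 are assumptions for illustration only.

```python
# Sketch of the planned-conversion-count replacement policy: each buffered
# (virtual page -> physical page) entry tracks how many conversions have
# completed, and an entry that has reached its planned count becomes the
# overwrite victim on a miss.

class TinyTLB:
    def __init__(self, page_table):
        self.page_table = page_table  # full map held in the storage unit
        self.buffer = {}              # vpn -> [ppn, completed, planned]

    def translate(self, vpn):
        if vpn in self.buffer:
            entry = self.buffer[vpn]
            entry[1] += 1             # one more completed conversion
            return entry[0]
        # Miss: overwrite an entry whose completed count reached its plan.
        victim = next((v for v, e in self.buffer.items() if e[1] >= e[2]),
                      None)
        if victim is not None:
            del self.buffer[victim]
        ppn = self.page_table[vpn]
        self.buffer[vpn] = [ppn, 1, 2]  # planned count of 2 (assumed)
        return ppn

tlb = TinyTLB({0: 10, 1: 11, 2: 12})
tlb.translate(0)   # fills the buffer; completed = 1
tlb.translate(0)   # hit; completed = 2 == planned
tlb.translate(1)   # miss: page 0 has met its plan, so it is evicted
```

Unlike plain LRU, this policy retires entries that have done their expected amount of work, which suits access patterns where each page's conversion count is known in advance.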
20110161623 | Data Parallel Function Call for Determining if Called Routine is Data Parallel - Mechanisms for performing data parallel function calls in code during runtime are provided. These mechanisms may operate to execute, in the processor, a portion of code having a data parallel function call to a target portion of code. The mechanisms may further operate to determine, at runtime by the processor, whether the target portion of code is a data parallel portion of code or a scalar portion of code and determine whether the calling code is data parallel code or scalar code. Moreover, the mechanisms may operate to execute the target portion of code based on the determination of whether the target portion of code is a data parallel portion of code or a scalar portion of code, and the determination of whether the calling code is data parallel code or scalar code. | 2011-06-30 |
20110161624 | Floating Point Collect and Operate - Mechanisms are provided for performing a floating point collect and operate for a summation across a vector for a dot product operation. A routing network placed before the single instruction multiple data (SIMD) unit allows the SIMD unit to perform a summation across a vector with a single stage of adders. The routing network routes the vector elements to the adders in a first cycle. The SIMD unit stores the results of the adders into a results vector register. The routing network routes the summation results from the results vector register to the adders in a second cycle. The SIMD unit then stores the results from the second cycle in the results vector register. | 2011-06-30 |
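The two-cycle summation above can be shown as pure dataflow for a four-element dot product: one stage of adders is reused, with the routing network feeding it neighbor pairs in cycle one and the partial sums in cycle two. This Python model shows only the arithmetic ordering, not the hardware.

```python
# Model of summation across a 4-element vector using one stage of adders
# plus a routing network. Cycle 1 routes neighboring elements to the
# adders; cycle 2 routes the stored partial sums back through the same
# adders to produce the final sum.

def route_and_add(vec):
    """One pass through the single adder stage: add element pairs."""
    return [vec[i] + vec[i + 1] for i in range(0, len(vec), 2)]

def dot_product(a, b):
    products = [x * y for x, y in zip(a, b)]  # element-wise multiply
    partial = route_and_add(products)         # cycle 1: 4 values -> 2 sums
    total = route_and_add(partial)            # cycle 2: 2 sums -> 1 sum
    return total[0]
```

For an N-element vector this takes log2(N) passes through the one adder stage, which is the saving over building log2(N) separate adder stages.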
20110161625 | Interconnection network connecting operation-configurable nodes according to one or more levels of adjacency in multiple dimensions of communication in a multi-processor and a neural processor - A Wings array system for communicating between nodes using store and load instructions is described. Couplings between nodes are made according to a 1 to N adjacency of connections in each dimension of a G×H matrix of nodes, where G≧N and H≧N and N is a positive odd integer. Also, a 3D Wings neural network processor is described as a 3D G×H×K network of neurons, each neuron with an N×N×N array of synaptic weight values stored in coupled memory nodes, where G≧N, H≧N, K≧N, and N is determined from a 1 to N adjacency of connections used in the G×H×K network. Further, a hexagonal processor array is organized according to an INFORM coordinate system having axes at 60 degree spacing. Nodes communicate on row paths parallel to an FM dimension of communication, column paths parallel to an IO dimension of communication, and diagonal paths parallel to an NR dimension of communication. | 2011-06-30 |
20110161626 | ROUTING PACKETS IN ON-CHIP NETWORKS - Techniques for packet routing in an on-chip network are provided. In one embodiment, a method for routing packets in a multi-core processor including multiple cores connected by an on-chip network includes identifying ports that are incorrect while routing the packet. After receiving the packet at an input port, some of the ports are excluded from consideration while selecting the output port for the packet. The output port is selected from the remaining ports and the packet is routed to the selected output port. | 2011-06-30 |
20110161627 | MECHANISMS TO AVOID INEFFICIENT CORE HOPPING AND PROVIDE HARDWARE ASSISTED LOW-POWER STATE SELECTION - An apparatus and method is described herein for avoiding inefficient core hopping and providing hardware assisted power state selection. Future idle-activity of cores is predicted. If the residency of activity patterns for efficient core hop scenarios is predicted to be large enough, a core is determined to be efficient and allowed. However, if efficient activity patterns are not predicted to be resident for long enough—inefficient patterns are instead predicted to be resident for longer—then a core hop request is denied. As a result, designers may implement a policy for avoiding core hops that weighs the potential gain of the core hop, such as alleviation of a core hop condition, against a penalty for performing the core hop, such as a temporal penalty for the core hop. Separately, idle durations associated with hardware power states for cores may be predicted in hardware. Furthermore, accuracy of the idle duration prediction is determined. Upon receipt of a request for a core to enter a power state, a power management unit may select either the hardware predicted power state, if the accuracy is high enough, or utilize the requested power state, if the accuracy of the hardware prediction is not high enough. | 2011-06-30 |
20110161628 | DATA PROCESSING APPARATUS AND METHOD OF CONTROLLING RECONFIGURABLE CIRCUIT LAYER - According to one embodiment, a data processing apparatus includes plural reconfigurable circuit layers, a first memory, a selecting unit, and a configuring unit. In each of the plural reconfigurable circuit layers, a processing circuit can be reconfigured. The first memory stores circuit information representing the processing circuits that should be configured. If not all of the plural reconfigurable circuit layers are needed to configure the processing circuits represented by the circuit information, the selecting unit selects a subset of the reconfigurable circuit layers with high preset priority orders; otherwise it selects all of the plural reconfigurable circuit layers. The configuring unit configures, using the selected reconfigurable circuit layers, the processing circuits represented by the circuit information stored in the first memory. | 2011-06-30 |
20110161629 | Arithmetic processor, information processor, and pipeline control method of arithmetic processor - An arithmetic processor includes a first pipeline unit configured to execute a first instruction that is input; a second pipeline unit configured to execute a second instruction that is input; a registration unit into which an aborted instruction is registered, the aborted instruction being the first instruction when the first pipeline unit is unable to complete the first instruction or the second instruction when the second pipeline unit is unable to complete the second instruction; a determination unit configured to make a determination as to which one of the first pipeline unit and the second pipeline unit is operating under a lower load; and an input unit configured to input, in the first pipeline unit or the second pipeline unit that is determined as operating under the lower load by the determination unit, the aborted instruction that is registered in the registration unit. | 2011-06-30 |
20110161630 | GENERAL PURPOSE HARDWARE TO REPLACE FAULTY CORE COMPONENTS THAT MAY ALSO PROVIDE ADDITIONAL PROCESSOR FUNCTIONALITY - An apparatus and method is described herein for replacing faulty core components. General purpose hardware is provided to replace core pipeline components, such as execution units. In the embodiment of execution unit replacement, a proxy unit is provided, such that mapping logic is able to map instructions/operations, which correspond to faulty execution units, to the proxy unit. As a result, the proxy unit is able to receive the operations, send them to general purpose hardware for execution, and subsequently write back the execution results to a register file; it essentially replaces the defective execution unit, allowing a processor with defective units to be sold or to continue operation. | 2011-06-30 |
20110161631 | Arithmetic processing unit, information processing device, and control method - According to an aspect of an embodiment of the invention, an arithmetic processing unit includes a first cache memory unit that holds a part of data stored in a storage device; an address register that holds an address; a flag register that stores flag information; and a decoder that decodes a prefetch instruction for acquiring data stored at the address in the storage device. The arithmetic processing unit further includes an instruction execution unit that executes a cache hit check instruction instead of the prefetch instruction on the basis of a decoded result when the flag information is held, the cache hit check instruction allowing for searching the first cache memory unit with the address to thereby make a first cache hit determination that the first cache memory unit holds the data stored at the address in the storage device. | 2011-06-30 |
20110161632 | COMPILER ASSISTED LOW POWER AND HIGH PERFORMANCE LOAD HANDLING - A method and apparatus for handling low power and high performance loads is herein described. Software, such as a compiler, is utilized to identify producer loads, consumer reuse loads, and consumer forwarded loads. Based on the identification by software, hardware is able to direct performance of the load directly to a load value buffer, a store buffer, or a data cache. As a result, accesses to cache are reduced, through direct loading from load and store buffers, without sacrificing load performance. | 2011-06-30 |
20110161633 | Systems and Methods for Monitoring Out of Order Data Decoding - Various embodiments of the present invention provide systems and methods for monitoring out of order data decoding. For example, a method for monitoring out of order data processing is provided that includes receiving a plurality of data sets associated with a plurality of identifiers, each of which indicates a respective one of the plurality of data sets; storing each of the plurality of identifiers in a FIFO memory in the order in which the corresponding data sets were received; processing the plurality of data sets such that at least one of the plurality of data sets is provided as an output data set; accessing the next available identifier from the FIFO memory; and asserting an out-of-order signal when the next available identifier is not the same as the identifier associated with the output data set. | 2011-06-30 |
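The monitoring steps above reduce to a queue comparison, sketched below. The `OutOfOrderMonitor` class and method names are illustrative; only the FIFO-versus-output identifier check comes from the abstract.

```python
# Sketch of the out-of-order monitor: identifiers are queued in arrival
# order, and when a processed data set emerges, its identifier is compared
# with the next identifier popped from the FIFO. A mismatch asserts the
# out-of-order signal.

from collections import deque

class OutOfOrderMonitor:
    def __init__(self):
        self.fifo = deque()

    def data_set_received(self, ident):
        self.fifo.append(ident)       # store identifiers in arrival order

    def data_set_output(self, ident):
        """Return True (signal asserted) if the output is out of order."""
        expected = self.fifo.popleft()
        return ident != expected

mon = OutOfOrderMonitor()
for i in (1, 2, 3):
    mon.data_set_received(i)
in_order = mon.data_set_output(1)      # 1 matches the head -> no signal
out_of_order = mon.data_set_output(3)  # 3 emerges before 2 -> signal
```

The FIFO holds only identifiers, not the data sets themselves, so the monitor's cost is independent of data set size.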
20110161634 | Processor, co-processor, information processing system, and method for controlling processor, co-processor, and information processing system - A processor includes a buffer that separates a sequence of instructions having no operand into segments and stores the segments, a data holder that holds data to be processed, a decoder that references the data and sequentially decodes at least one of the instructions from the top of the sequence, an instruction execution unit that executes the instruction, and an instruction sequence control unit that controls updating of the instruction sequence in accordance with the decoding result. When the decoded top instruction is a branch instruction and if a branch is taken, the instruction sequence control unit updates the sequence so that the top instruction of one of the segments is located at the top of the sequence. If a branch is not taken, the instruction sequence control unit updates the sequence so that an instruction immediately next to the branch instruction is located at the top of the sequence. | 2011-06-30 |
20110161635 | Rotate instructions that complete execution without reading carry flag - A method of one aspect may include receiving a rotate instruction. The rotate instruction may indicate a source operand and a rotate amount. A result may be stored in a destination operand indicated by the rotate instruction. The result may have the source operand rotated by the rotate amount. Execution of the rotate instruction may complete without reading a carry flag. | 2011-06-30 |
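A rotate of this kind is a pure function of the source operand and rotate amount, which is why it can complete without reading the carry flag (and hence without serializing behind prior flag-writing instructions). A minimal rotate-left, assuming a 32-bit register width for illustration:

```python
# A rotate-left that, like the described instruction, depends only on the
# source operand and the rotate amount -- no carry flag is read.

def rol(value, amount, width=32):
    """Rotate `value` left by `amount` bits within a `width`-bit register."""
    amount %= width                       # rotates wrap at the register width
    mask = (1 << width) - 1
    return ((value << amount) | (value >> (width - amount))) & mask
```

Contrast this with rotate-through-carry forms (e.g. x86 RCL/RCR), whose result does depend on the incoming carry flag and therefore cannot complete until that flag is available.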
20110161636 | METHOD OF MANAGING POWER OF MULTI-CORE PROCESSOR, RECORDING MEDIUM STORING PROGRAM FOR PERFORMING THE SAME, AND MULTI-CORE PROCESSOR SYSTEM - Provided are a method of managing power of a multi-core processor, a recording medium storing a program for performing the method, and a multi-core processor system. The method of managing power of a multi-core processor having at least one core includes determining a parallel-processing section on the basis of information included in a parallel-processing program, collecting information for determining a clock frequency of the core in the determined parallel-processing section according to each core, and then determining the clock frequency of the core on the basis of the collected information. Accordingly, it is possible to minimize power consumption while ensuring quality of service (QoS). | 2011-06-30 |
20110161637 | APPARATUS AND METHOD FOR PARALLEL PROCESSING - An apparatus and method for parallel processing in consideration of degree of parallelism are provided. One of a task parallelism and a data parallelism is dynamically selected while a job is processed. In response to a task parallelism being selected, a sequential version code is allocated to a core or processor for processing a job. In response to a data parallelism being selected, a parallel version code is allocated to a core or processor for processing a job. | 2011-06-30 |
20110161638 | Ising Systems: Helical Band Geometry For DTC and Integration of DTC Into A Universal Quantum Computational Protocol - Disclosed herein are efficient geometries for dynamical topology changing (DTC), together with protocols to incorporate DTC into quantum computation. Given an Ising system, twisted depletion to implement a logical gate T, anyonic state teleportation into and out of the topology altering structure, and certain geometries of the (1,−2)-bands, a classical computer can be enabled to implement a quantum algorithm. | 2011-06-30 |
20110161639 | Event counter checkpointing and restoring - A method of one aspect may include storing an event count of an event counter that counts events that occur during execution within a logic device. The method may further include restoring the event counter to the stored event count after the event counter has counted additional events. Other methods are also disclosed. Apparatus, systems, and machine-readable medium having software are also disclosed. | 2011-06-30 |
20110161640 | APPARATUS AND METHOD FOR CONFIGURABLE PROCESSING - A configurable execution unit comprises operators capable of being dynamically configured by an instruction at the level of processing multi-bit operand values. The unit comprises one or more dynamically configurable operator modules, the or each module being connectable to receive input operands indicated in an instruction, and a programmable lookup table connectable to receive dynamic configuration information determined from an opcode portion of the instruction and capable of generating operator configuration settings defining an aspect of the function or behavior of a configurable operator module, responsive to the dynamic configuration information in the instruction. | 2011-06-30 |
20110161641 | SPE Software Instruction Cache - An application thread executes a direct branch instruction that is stored in an instruction cache line. Upon execution, the direct branch instruction branches to a branch descriptor that is also stored in the instruction cache line. The branch descriptor includes a trampoline branch instruction and a target instruction space address. Next, the trampoline branch instruction sends a branch descriptor pointer, which points to the branch descriptor, to an instruction cache manager. The instruction cache manager extracts the target instruction space address from the branch descriptor, and executes a target instruction corresponding to the target instruction space address. In one embodiment, the instruction cache manager generates a target local store address by masking off a portion of bits included in the target instruction space address. In turn, the application thread executes the target instruction located at the target local store address accordingly. | 2011-06-30 |
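The address derivation in the embodiment above is a simple bitmask, sketched below. The 18-bit local store size (256 KB, typical of an SPE local store) is an assumption used only to pick a mask width.

```python
# Sketch of deriving a target local store address from a target
# instruction space address by masking off the high-order bits, as the
# instruction cache manager does in the described embodiment.

LS_BITS = 18                  # assumed 256 KB local store -> 18 address bits
LS_MASK = (1 << LS_BITS) - 1  # 0x3FFFF

def to_local_store_address(instruction_space_addr):
    """Mask off the upper bits to get the local store offset."""
    return instruction_space_addr & LS_MASK
```

Because the mask is a power-of-two minus one, distinct instruction space addresses that share their low 18 bits alias to the same local store slot, which is exactly what lets the software cache overlay a large instruction space onto a small local store.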
20110161642 | Parallel Execution Unit that Extracts Data Parallelism at Runtime - Mechanisms for extracting data dependencies during runtime are provided. With these mechanisms, a portion of code having a loop is executed. A first parallel execution group is generated for the loop, the group comprising a subset of iterations of the loop less than a total number of iterations of the loop. The first parallel execution group is executed by executing each iteration in parallel. Store data for iterations are stored in corresponding store caches of the processor. Dependency checking logic of the processor determines, for each iteration, whether the iteration has a data dependence. Only the store data for stores where there was no data dependence determined are committed to memory. | 2011-06-30 |
20110161643 | Runtime Extraction of Data Parallelism - Mechanisms for extracting data dependencies during runtime are provided. The mechanisms execute a portion of code having a loop and generate, for the loop, a first parallel execution group comprising a subset of iterations of the loop less than a total number of iterations of the loop. The mechanisms further execute the first parallel execution group and determine, for each iteration in the subset of iterations, whether the iteration has a data dependence. Moreover, the mechanisms commit store data to system memory only for stores performed by iterations in the subset of iterations for which no data dependence is determined. Store data of stores performed by iterations in the subset of iterations for which a data dependence is determined is not committed to the system memory. | 2011-06-30 |
20110161644 | INFORMATION PROCESSOR - When a plurality of OSs are mounted on one system, it is desirable to use memory resources efficiently without affecting the other OSs, and because different OSs share the one system, inter-OS communication that does not affect the other OSs is also required. Accordingly, an information processor includes: firmware that assigns a first central processing unit, a first operating system, and a first region being a partial region of a memory as a first domain, assigns a second central processing unit, a second operating system, and a second region being a partial region of the memory as a second domain, and disables access by one domain to the region assigned to the other domain; and middleware that controls communication when data communication is required between the first domain and the second domain. Further, when code sharable between the operating systems is available, the code is stored in a region of the memory to which each domain has read-only access. Still further, when communication is executed between the domains, each domain accesses the memory region for the communication while access to that region is limited by the middleware and the firmware. | 2011-06-30 |
20110161645 | CONTENT SECURING SYSTEM - In a method for securing content in a system containing a security processor configured to control access to the content by a main processor, the main processor being configured to send heartbeats to the security processor, a determination is made as to whether at least one heartbeat was received within a predicted time interval, and in response to a determination that at least one heartbeat was not received within the predicted time interval, access to the content by the main processor is ceased. | 2011-06-30 |
20110161646 | Method for performing quick boot and general boot at bios stage - A method for performing a quick boot and a general boot at a basic input output system (BIOS) stage is described. A computer is powered on. An embedded controller firmware or a BIOS determines whether a quick boot key is pressed. If the quick boot key is not pressed, a boot flag is changed from Quick Boot to General Boot. If the quick boot key is pressed, the BIOS determines whether the boot flag is set to Quick Boot. If it is determined that the boot flag is set to Quick Boot, an initialization of drivers preset by the quick boot is performed, and uninitialized drivers are initialized at a stage when an operating system is started. If it is determined that the boot flag is set to General Boot, an initialization of all drivers is performed. | 2011-06-30 |
20110161647 | BOOTABLE VOLATILE MEMORY DEVICE, MEMORY MODULE AND PROCESSING SYSTEM COMPRISING BOOTABLE VOLATILE MEMORY DEVICE, AND METHOD OF BOOTING PROCESSING SYSTEM USING BOOTABLE VOLATILE MEMORY DEVICE - A bootable volatile memory device comprises a volatile memory area configured to be written to and read from by a host processor, a boot code area configured to store bootstrap code before a boot procedure is performed by the host processor, a first chip select terminal configured to output a signal used as a chip select signal where the host processor performs the boot procedure by reading the bootstrap code from the boot code area, and a second chip select terminal configured to output a signal used as a chip select signal where the host processor writes and reads data to and from the volatile memory area. | 2011-06-30 |
20110161648 | SOFTWARE LOADING METHOD AND APPARATUS - A method and an apparatus that enable loading of computer programs to a trusted computing platform. The computer program loading is enabled by executing a first program loader ( | 2011-06-30 |
20110161649 | SYSTEMS AND METHODS FOR BOOTING A BOOTABLE VIRTUAL STORAGE APPLIANCE ON A VIRTUALIZED SERVER PLATFORM - One embodiment is a method for booting a bootable virtual storage appliance on a virtualized server platform. One such method comprises: providing a virtual storage appliance on a server platform, the virtual storage appliance configured to manage a disk array comprising a plurality of disks, and wherein at least one of the disks comprises a hidden boot partition having a boot console; powering up the server platform; loading boot code on the server platform; loading the boot console from the hidden boot partition; and the boot console loading boot components for a virtualization environment. | 2011-06-30 |
20110161650 | PROCESSOR SYSTEM - An electronic circuit includes a more-secure processor having hardware based security for storing data. A less-secure processor eventually utilizes the data. By a data transfer request-response arrangement between the more-secure processor and the less-secure processor, the more-secure processor confers greater security of the data on the less-secure processor. A manufacturing process makes a handheld device having a storage space, a less-secure processor for executing modem software and a more-secure processor having a protected application and a secure storage. A manufacturing process involves generating a per-device private key and public key pair, storing the private key in a secure storage where it can be accessed by the protected application, combining the public key with the modem software to produce a combined software, signing the combined software; and storing the signed combined software into the storage space. Other processes of manufacture, processes of operation, circuits, devices, wireless and wireline communications products, wireless handsets and systems are disclosed and claimed. | 2011-06-30 |
20110161651 | DETERMINING ELECTRICAL COMPATIBILITY AND/OR CONFIGURATION OF DEVICES IN A PRE-BOOT ENVIRONMENT - In at least some embodiments, a system comprises a plurality of electrical devices and management logic coupled to the electrical devices. While the electrical devices are each in a pre-boot environment, the management logic obtains information from the electrical devices and uses the information to determine electrical compatibility of, and/or configure, the electrical devices. | 2011-06-30 |
20110161652 | SYSTEM, APPARATUS, AND METHOD FOR INHIBITING OPERATION THAT MODIFIES PROGRAM CONFIGURATION - An operation inhibiting system includes an image forming apparatus in which programs are installed and an operation inhibition information providing apparatus, wherein the image forming apparatus includes a configuration information storing unit to store configuration information about the installed programs, an operation inhibition information acquiring unit to transmit the configuration information to the operation inhibition information providing apparatus, and to receive operation inhibition information that is transmitted from the operation inhibition information providing apparatus in response to the configuration information, the operation inhibition information indicating on a program-specific basis whether an operation to modify a configuration of an installed program is allowed, and an operation unit to inhibit the operation on the program based on the received operation inhibition information, wherein the operation inhibition information providing apparatus includes a unit that transmits the operation inhibition information responsive to the configuration information upon receiving the configuration information. | 2011-06-30 |
20110161653 | Logical Partition Media Access Control Impostor Detector - Provided are techniques to enable a virtual input/output server (VIOS) to establish cryptographically secure signals with target LPARs to detect an impostor or spoofing LPAR. The secure signal, or “heartbeat,” may be configured as an Internet Key Exchange/Internet Protocol Security (IKE/IPSec) Encapsulating Security Payload (ESP) connection or tunnel. Within the tunnel, the VIOS pings each target LPAR and, if a heartbeat is interrupted, the VIOS determines whether the tunnel is broken, the corresponding LPAR is down, or a media access control (MAC) spoofing attack is occurring. The determination is made by sending a heartbeat that is designed to fail unless the heartbeat is received by a spoofing device. | 2011-06-30 |
20110161654 | PEER-TO-PEER TELEPHONY RECORDING - System and method for recording communication sessions in peer-to-peer communication networks. End-devices of the peer-to-peer communication network may register with a selected super-node that may fork media to a recording system for recording. Communication sessions arriving at a call center may be transferred between the external end-device and the target agent end-device via a recorder and the communication session media may be recorded. Alternatively, a conference call may be established between an external end-device, a target agent end-device of a call center and a recorder over a peer-to-peer communication network. After the conference call is established, the recorder may receive media transferred between the external end-device and the target agent end-device and record that media. | 2011-06-30 |
20110161655 | DATA ENCRYPTION PARAMETER DISPERSAL - A method begins with a processing module obtaining encoded key slices from a plurality of user devices and decoding a threshold number of the encoded key slices utilizing a first error coding dispersal storage function to produce a key when the threshold number of the encoded key slices has been obtained. The method continues with the processing module receiving encoded data slices and decoding a threshold number of encoded data slices utilizing a second error coding dispersal storage function to produce encrypted data when the threshold number of the encoded data slices has been received. The method continues with the processing module decrypting the encrypted data utilizing the key and an encryption function to produce data. | 2011-06-30 |
20110161656 | SYSTEM AND METHOD FOR PROVIDING DATA SECURITY IN A HOSTED SERVICE SYSTEM - Aspects of the present disclosure are directed to methods and systems for protecting sensitive data in a hosted service system. The system includes a host system and the host system includes a key management system (KMS) and a metadata service system (MSS). The KMS and the MSS are communicatively coupled to each other. The system further includes a database management system (DBMS) having a database, a query pre-parser, and a results handler. The query pre-parser and the results handler are communicatively coupled to the KMS and the MSS, and the system also includes a processing application adapted to process at least some data received from a tenant system. | 2011-06-30 |
20110161657 | METHOD AND SYSTEM FOR PROVIDING TRAFFIC HASHING AND NETWORK LEVEL SECURITY - An approach is provided for enabling traffic hashing and network level security. A unit of transmission associated with a flow of network traffic is received at a routing node. The unit of transmission is encrypted. A pseudo-address to assign to the encrypted unit of transmission is determined. The pseudo-address is assigned to the encrypted unit of transmission. | 2011-06-30 |
20110161658 | METHOD FOR ENABLING LIMITATION OF SERVICE ACCESS - A method for enabling limitation of service access, wherein a service provider offers at least one service and a user possesses multiple different digital identities that can be used to invoke or register with the service, access to the service requiring an account at a third party entity, the user registers his digital identities with the account and agrees on a secret with the third party entity, the method including: | 2011-06-30 |
20110161659 | METHOD TO ENABLE SECURE SELF-PROVISIONING OF SUBSCRIBER UNITS IN A COMMUNICATION SYSTEM - A method to enable remote, secure, self-provisioning of a subscriber unit includes, a security provisioning server: receiving, from a subscriber unit, a certificate signing request having subscriber unit configuration trigger data; generating provisioning data for the subscriber unit using the subscriber unit configuration trigger data; and in response to the certificate signing request, providing to the subscriber unit the provisioning data and a subscriber unit certificate having authorization attributes associated with the provisioning data, to enable the self-provisioning of the subscriber unit. | 2011-06-30 |
20110161660 | TEMPORARY REGISTRATION OF DEVICES - In a method of temporarily registering a second device with a first device, in which the first device includes a temporary registration mode, the temporary registration mode in the first device is activated, a temporary registration operation in the first device is initiated from the second device, a determination as to whether the second device is authorized to register with the first device is made, and the second device is temporarily registered with the first device in response to a determination that the second device is authorized to register with the first device, in which the temporary registration requires that at least one of the second device and the first device delete information required for the temporary registration following at least one of a determination of a network connection between the first device and the second device and a powering off of at least one of the first device and the second device. | 2011-06-30 |
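Several of the applications above (20110161645 and 20110161653) rely on the same basic mechanism: a watchdog that revokes access when an expected heartbeat fails to arrive within a predicted time interval. A minimal sketch of that idea, in Python, is shown below; the class and method names are illustrative only and do not come from any of the filings.

```python
import time


class HeartbeatWatchdog:
    """Toy model of heartbeat-gated access: the monitor (the "security
    processor") allows access only while heartbeats from the monitored
    side (the "main processor") keep arriving within a predicted interval.
    All names and interval values here are assumptions for illustration."""

    def __init__(self, predicted_interval: float):
        self.predicted_interval = predicted_interval
        self.last_heartbeat = time.monotonic()
        self.access_enabled = True

    def receive_heartbeat(self) -> None:
        # Called each time the monitored side sends a heartbeat.
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        # Cease access if no heartbeat arrived within the predicted interval.
        if time.monotonic() - self.last_heartbeat > self.predicted_interval:
            self.access_enabled = False
        return self.access_enabled
```

In a real system the interval check would run on the security side and the revocation would gate the content path; this sketch only captures the timing logic common to both abstracts.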