16th week of 2015 patent application highlights part 55 |
Patent application number | Title | Published |
20150106501 | FACILITATING HIGH QUALITY NETWORK DELIVERY OF CONTENT OVER A NETWORK - Providing for improved efficiency in delivery of content over a network is described herein. By way of example, a metric of communication related to electronic communication between a device and a network access point can be obtained and utilized to calculate or infer a resource load associated with delivering the content to the device. If the metric of communication indicates a resource load that exceeds a predetermined measure, a message can be sent to a content server originating the provisioning of content for the device. In particular aspects, the message can instruct the content server to reduce a resource-impacting characteristic of the content, or transmission of the content. The metric of communication can continue to be monitored, and the change to the resource-impacting characteristic can be maintained or revoked based on subsequent indications of the metric of communication. | 2015-04-16 |
20150106502 | DYNAMIC ASSIGNMENT OF CONNECTION PRIORITIES FOR APPLICATIONS OPERATING ON A CLIENT DEVICE - Providing for prioritization of applications operating on a client device with respect to access to online content or services is described herein. By way of example, the prioritizing can be correlated with allocation of network resources for respective applications. An application having higher priority can be allocated a larger amount of resources, a guaranteed amount of resources, a guaranteed quality of service, first access to resources, or the like. Likewise, an application with lower priority can be allocated a lower amount of resources, have best effort resources, access to residual resources, and so forth. In various embodiments, applications can be prioritized based on a status of the application with respect to the client device. As one illustrative example, applications actively receiving content, responding to user commands, or maximized or displayed on a graphic display can be afforded higher priority, whereas inactive or minimized applications can be afforded a lower priority. | 2015-04-16 |
20150106503 | PREDICTIVE CLOUD PROVISIONING BASED ON HUMAN BEHAVIORS AND HEURISTICS - Embodiments relate to predictively provisioning cloud resources based on human behaviors and heuristics. An aspect includes monitoring a collection of events relating to a customer application as well as monitoring an infrastructure load on resources for the customer application. A causal relationship is evaluated between an event and the infrastructure load. A predictive rule is then constructed based on the causal relationship. Resource requirements are anticipated based on the predictive rule and a provisioning of resources in a service domain is requested for the anticipated resource requirements. | 2015-04-16 |
20150106504 | SECURE CLOUD MANAGEMENT AGENT - A system for providing a secure management agent for high-availability continuity for cloud systems includes a computer processor and logic executable by the computer processor. The logic is configured to implement a method. The method includes receiving operating parameters and threshold settings for a plurality of computing clouds. Secure relationships are established with the plurality of computing clouds based on the operating parameters. Data is mirrored across the plurality of computing clouds. Threshold data is then monitored for the plurality of computing clouds to maintain a continuity of resources for the plurality of computing clouds. | 2015-04-16 |
20150106505 | METHODS AND APPARATUS TO MEASURE EXPOSURE TO STREAMING MEDIA - Methods and apparatus to measure exposure to streaming media are disclosed. An example method includes detecting an ID3 tag associated with streaming media presented at a client device. A first request is sent from the client device to a first internet domain, the first request identifying the streaming media. A redirection message is received from the first internet domain. In response to the redirection message, a second request is sent to a second internet domain specified by the redirection message. A cookie is provided identifying the client device to the second internet domain. | 2015-04-16 |
20150106506 | HUMAN-MACHINE INTERFACE (HMI) SYSTEM HAVING ELEMENTS WITH AGGREGATED ALARMS - A system manages human machine interface (HMI) applications for industrial control and automation. Software instructions stored on a tangible, non-transitory media and executable by a processor receive data indicative of a manufacturing/process control system being monitored and display a user interface indicative of a status of the manufacturing/process control system being monitored wherein the status is based on the received data. | 2015-04-16 |
20150106507 | SELECTION SYSTEM, SELECTION SERVER, SELECTION METHOD, AND COMPUTER READABLE MEDIUM - A selection system includes an acquiring unit, a candidate selecting unit, and a product selecting unit. The acquiring unit acquires device information of multiple devices. The candidate selecting unit selects, from among the multiple devices, a device whose device information does not meet a predetermined criterion, as a candidate for a device to be replaced. The product selecting unit selects, based on a selection result by the candidate selecting unit and product information regarding multiple products that meet the predetermined criterion, a device to be replaced, from among candidates selected, and a replacement product with which the device is to be replaced. | 2015-04-16 |
20150106508 | METHOD AND DEVICE FOR COMMISSIONING OF NODES OF A NETWORK - The present invention provides a method for commissioning of nodes of a network. The method comprises the steps of (S | 2015-04-16 |
20150106509 | METHOD FOR REPRESENTING USAGE AMOUNT OF MONITORING RESOURCE, COMPUTING DEVICE, AND RECORDING MEDIUM HAVING PROGRAM RECORDED THEREON FOR EXECUTING THEREOF - A method of representing a usage of a monitoring resource includes designating monitoring target processes based on a weight file including a resource weight assigned according to resource importance for each of at least one process, minimum and maximum values of a corresponding resource weight and a resource identifier; applying the resource weight to resource items, including CPU, memory and I/O usage rates, that influence each of the monitoring target processes; and visually representing each of the monitoring target processes according to the applied resource weight, either upon a user request or periodically. Therefore, this application may apply a resource weight according to the importance of a resource being used by each process, so that a user may check and actively deal with the system status. | 2015-04-16 |
20150106510 | SECURED SEARCH - A method for estimating web traffic to a website is disclosed. The method may include obtaining a first set of reporting information from a secured external source that directs traffic to the website. The first set of reporting information may have a corresponding portion of reporting information which is not provided from the secured external source. The method may include obtaining a second set of reporting information from an unsecured external source that directs traffic to the website. The second set of reporting information may be different than the first set of reporting information. The method may also include generating an estimation of the corresponding portion of reporting information which is not provided from the secured external source by correlating the second set of reporting information with the first set of reporting information. | 2015-04-16 |
20150106511 | SECURE CLOUD MANAGEMENT AGENT - A method for providing a secure management agent for high-availability continuity for cloud systems includes receiving operating parameters and threshold settings for a plurality of computing clouds. Secure relationships are established with the plurality of computing clouds based on the operating parameters. Data is mirrored across the plurality of computing clouds. Threshold data is then monitored for the plurality of computing clouds to maintain a continuity of resources for the plurality of computing clouds. | 2015-04-16 |
20150106512 | PREDICTIVE CLOUD PROVISIONING BASED ON HUMAN BEHAVIORS AND HEURISTICS - A method of predictively provisioning cloud resources based on human behaviors and heuristics includes monitoring a collection of events relating to a customer application as well as monitoring an infrastructure load on resources for the customer application. A causal relationship is evaluated between an event and the infrastructure load. A predictive rule is then constructed based on the causal relationship. Resource requirements are anticipated based on the predictive rule and a provisioning of resources in a service domain is requested for the anticipated resource requirements. | 2015-04-16 |
20150106513 | SYSTEM AND METHOD FOR OPERATING NETWORK TRAFFIC REDUCTION POLICY IN OVERLOADED AREA - A system for operating network traffic reduction policy in an overloaded area includes a storage medium for storing policy agent identification information disposed at a wireless terminal apparatus in response to the wireless terminal apparatus identification information; a communication unit for receiving the wireless terminal apparatus identification information positioned at an overloaded area from a communication server; a confirmation unit for confirming the policy agent identification information disposed at the wireless terminal apparatus corresponding to the received wireless terminal apparatus identification information by the storage medium; and a processing unit for transmitting network usage cutoff policy information included with a timer for clearing network usage cutoff to policy agents corresponding to the policy agent identification information confirmed by the confirmation unit. | 2015-04-16 |
20150106514 | Methods and Systems for Network Connectivity - Methods and systems are provided for connecting an electronic device to a network. In some situations, the electronic device connects to a first network provider and pings a first server having a static internet protocol address and a second server having a dedicated uniform resource locator. If the electronic device receives a response from the first and second server, the electronic device maintains its connection to the first network provider. Otherwise, the electronic device connects to a second network provider and pings the first and second servers. | 2015-04-16 |
20150106515 | Bandwidth Measurement - Methods for testing network bandwidth availability in a non-intrusive manner. By implementing occasional, base-line bandwidth testing, a more accurate indication of actual transfer rate results. When an application dependent upon network bandwidth is first executed, a series of file transfers takes place utilizing a series of different sized pieces of content. | 2015-04-16 |
20150106516 | PROVIDING A WITNESS SERVICE - Described are embodiments directed at providing a witness service that sends notifications with a resource state to clients. Embodiments provide a protocol that includes various messages for registering and receiving notifications regarding the state of a resource. The protocol may include a message for requesting node information from a first node in a cluster. The node information identifies nodes in the cluster that provide a witness service, which monitors a resource. The protocol includes a message that is used to register with the witness service for notifications regarding a state, or state change, of a network or cluster resource. The protocol also includes messages for sending notifications with state information of the resource. | 2015-04-16 |
20150106517 | SYSTEM AND METHOD FOR DELAYED DEVICE REGISTRATION ON A NETWORK - Systems and methods for enabling a computing device to be registered and authorized for network access, while deferring device hardware address capture until a later time. Subsequently, when the computing device connects to a network location at which the hardware address can be detected, registration and authorization can be fully completed. In some cases, the subsequent completion can be performed automatically and without user intervention. | 2015-04-16 |
20150106518 | MANAGING VIRTUAL NETWORK PORTS - Managing virtual network ports on a physical server to provide a virtual server access to a group of storage resources through a network. A storage access group representing a group of storage resources is generated. A virtual server is generated on a hypervisor executed on the physical server. Access to the network is activated for the virtual server. A management console is provided for creating and managing the storage access group providing access to the group of storage resources for the virtual server from one or more physical servers. The management console includes a virtual server management facility and a storage access group facility. The virtual server management facility allows for managing virtual server definitions and activating, deactivating, and migrating virtual servers. The storage access group facility allows for managing virtual network port descriptions, administrating network port names, and creating, activating and deactivating virtual network ports. | 2015-04-16 |
20150106519 | SELECTIVE MULTIPLE-MEDIA ACCESS CONTROL - A communication system and method includes receiving payload data of first and second media access control (MAC) frames. A MAC-level protocol is identified in response to the indication of the selected network for each of the first and second MAC frames. The payload data of the first and second MAC frames is transmitted and/or received across respective networks using, for example, power line communications signals over a common communications medium. The common communications medium is operable for carrying signals of a plurality of networks. | 2015-04-16 |
20150106520 | Efficient Provisioning & Deployment of Virtual Machines - Machines, systems and methods for managing quality of service in a virtualized computing environment, the method comprising: provisioning one or more active virtual machines (VMs) over one or more hosts in a virtualized computing network, wherein one or more resources are allocated to the active VMs before the active VMs service one or more requests; monitoring information associated with quality of service defined for servicing of the requests; and designating at least an active VM as a shadow VM, in response to results of the monitoring, wherein at least one resource remains allocated to the shadow VM, while the shadow VM enters a dormant state and no longer services any requests. | 2015-04-16 |
20150106521 | PLUGGABLE CLOUD ENABLEMENT BOOT DEVICE AND METHOD - A pluggable cloud enablement boot device (PCEBD) is a bootable device that includes all information needed to automatically provision hardware and software to create a computing solution that meets customer requirements. This allows for quickly deploying a computing solution in a manner that eliminates many manual steps that are typically performed today. The PCEBD uses firmware to verify a given platform has sufficient resources to deploy the PCEBD. The computing solution, once provisioned and running, can be modified, and these modifications may be reflected in the definition of the PCEBD. In addition, a computing solution may include multiple resources provisioned from multiple PCEBDs, which can be packaged into a PCEBD that will include other PCEBDs. The result is a way to deploy computing solutions that is much more efficient than the manual methods used in the prior art. | 2015-04-16 |
20150106522 | SELECTING A TARGET SERVER FOR A WORKLOAD WITH A LOWEST ADJUSTED COST BASED ON COMPONENT VALUES - If a first workload is supported by candidate servers with different architectures, a determination is made that a selected workload is the first workload. If the first workload is not supported by candidate servers with the different architectures, a determination is made that the selected workload is a second workload. Components of the candidate servers are determined, and statistics are collected, and component values are determined. If the components impact performance of the selected workload, weights are set for the components to be a percentage impact of the components on the selected workload. If the components do not impact performance, weights are set to be one. Functions of the component values and the weights are calculated. The results of the functions are processed with costs of the candidate servers to yield adjusted costs. The selected workload is moved to the candidate server with a lowest adjusted cost. | 2015-04-16 |
20150106523 | DISTRIBUTED GLOBAL LOAD-BALANCING SYSTEM FOR SOFTWARE-DEFINED DATA CENTERS - The disclosure herein describes a system for providing distributed global server load balancing (GSLB) over resources across multiple data centers. The system includes a directory group comprising one or more directory nodes and a plurality of GSLB nodes registered to the directory group. A respective GSLB node is configured to provide GSLB services over a respective portion of the resources. A directory node includes a domain name system (DNS) query-receiving module configured to receive a DNS query from a client, a node-selecting module configured to select from the plurality of GSLB nodes a first GSLB node based at least on the DNS query, and a DNS query-responding module configured to respond to the DNS query to the client using an address of the selected first GSLB node, thereby facilitating the selected first GSLB node in performing GSLB while resolving the DNS query. | 2015-04-16 |
20150106524 | METHOD OF FILTERING APPLICATIONS - A method of configuring a graphical user interface in a computing device, the device comprising a collection of applications ( | 2015-04-16 |
20150106525 | DISTRIBUTION OF APPLICATIONS OVER A DISPERSED NETWORK - Disclosed are various embodiments for facilitating anticipatory distribution of applications to a network of remote hosts. A demand for each of the applications is calculated. Based on criteria within the demand and computing resources available, remote hosts are selected to receive the applications. Transmissions of the applications to the selected remote hosts are scheduled and monitored for completion according to the schedule. | 2015-04-16 |
20150106526 | PROVISIONING A NETWORK FOR NETWORK TRAFFIC - A network system comprising a software-defined network (SDN) controller and an application program interface (API) communicatively coupled to an application and the SDN controller in which data is provided from the API to the SDN controller, the data comprising information regarding the application session characteristics associated with a new session to be initiated on the network. A method of provisioning a network for network traffic comprising receiving data at a software-defined network (SDN) controller from an application program interface (API) describing application information associated with a session to be initiated on the network from an end-point device associated with a number of nodes in the network, and providing the API with real-time data describing available bandwidth on the network that the application may use. | 2015-04-16 |
20150106527 | SYSTEM AND METHOD TO CORRELATE LOCAL MEDIA URIs BETWEEN WEB BROWSERS - Various disclosed embodiments include methods and systems for correlating local media uniform resource identifiers (URIs) between a first web browser of a first user device and a second web browser of a second user device. The method comprises establishing a session between the first web browser, the second web browser, and a server. The method comprises performing an action related to a first URI on the first web browser. The method comprises encoding the performed action as a resource description framework (RDF) graph including the first URI and sending the RDF graph to the server. The method comprises translating the received RDF graph to a second RDF graph including a second URI based on a predicate stored in the server. The method comprises sending the second RDF graph to the second browser. | 2015-04-16 |
20150106528 | COMMUNICATION OF DATA OF A WEB REAL-TIME COMMUNICATION VIA A CARRIER-GRADE ENVIRONMENT - A method, a device, and a non-transitory storage medium having instructions to establish a web connection with a user device and provide access to a carrier-grade network in support of a Web Real Time Communication (WebRTC) session; obtain service data that includes data pertaining to a user of the user device; assign a level of trustworthiness to the service data; generate a message, wherein the message includes a request to initiate the WebRTC session; package the service data in the message based on the level of trustworthiness; and transmit the message to another device. | 2015-04-16 |
20150106529 | TERMINAL APPARATUS AND METHOD FOR CONNECTING TO VIRTUAL SERVER IN VIRTUAL DESKTOP INFRASTRUCTURE - A terminal apparatus and a method for connecting to a virtual server in a virtual desktop infrastructure (VDI) are disclosed. A control method of a terminal apparatus which uses a virtual machine (VM) of a virtual server in a VDI includes: receiving input of VDI connection information to connect to the virtual server; connecting to the virtual server based on the VDI connection information; determining whether or not a predetermined event occurs in a state in which the terminal apparatus is connected to the virtual server and entering a standby mode; and, when a user command to enter an activation mode is received in the standby mode, reconnecting to the virtual server based on the VDI connection information. Accordingly, when the terminal apparatus converts from the standby mode to the activation mode in the VDI, the terminal apparatus can easily reconnect to the disconnected virtual server. | 2015-04-16 |
20150106530 | Communication Efficiency - There is provided a solution in which a wireless node is caused to configure a plurality of transport layer protocol streams for a communication with another node via at least one communication path, wherein each transport layer protocol stream has a different maximum segment size; monitor at least one performance parameter of each communication path between the wireless node and the other node; and select at least one transport layer protocol stream for the communication on the basis of the monitoring. | 2015-04-16 |
20150106531 | MULTICAST OF STREAM SELECTION FROM PORTABLE DEVICE - To view media, a user may select a media stream by operating a portable device that controls a media presentation device. The portable device may be configured to multicast this stream selection to both the media presentation device and a selection analysis machine. The remote control may have or include both an infrared emitter and a cellular telephone, and the stream selection may be sent both to the media presentation device and to the selection analysis machine. The selection analysis machine may receive and store stream selections over a period of time, and these aggregated stream selections may form all or part of a profile of a user or a group of users who use the media presentation device. This profile may indicate viewing habits and choices of one or more users of the media presentation device, and the selection analysis machine may analyze this profile. | 2015-04-16 |
20150106532 | TECHNIQUES FOR STORAGE CONTROLLER QUALITY OF SERVICE MANAGEMENT - A technique for managing a data network includes monitoring data transfer rates and data transfer thresholds for data transferred between storage and an application. Feedback on the suitability of the data transfer rate is collected from the application. A data transfer threshold for the application is changed based on the monitored data transfer rate and the collected feedback. | 2015-04-16 |
20150106533 | COMMUNICATION APPARATUS, METHOD FOR CONTROLLING COMMUNICATION APPARATUS, AND STORAGE MEDIUM - A communication apparatus comprises a first assignment unit configured to, with respect to a first other communication apparatus that is connected to a wireless network created by the communication apparatus, assign an address based on a first address assignment method; a second assignment unit configured to assign an address to the first other communication apparatus based on a second address assignment method that is different from the first address assignment method; and a control unit configured to perform control such that activation of the first assignment unit is prevented if an address has been assigned to the first other communication apparatus by the second assignment unit. | 2015-04-16 |
20150106534 | METHOD, A COMPUTER PROGRAM PRODUCT, AND A CARRIER FOR INDICATING ONE-WAY LATENCY IN A DATA NETWORK - Disclosed herein is a method, a computer program product, and a carrier for indicating one-way latency in a data network (N) between a first node (A) and a second node (B), wherein the data network (N) lacks continuous clock synchronization, comprising: a pre-synchronization step, a measuring step, a post-synchronization step, an interpolation step, and generating a latency profile. The present invention also relates to a computer program product incorporating the method, a carrier comprising the computer program product, and a method for indicating server functionality based on the first aspect. | 2015-04-16 |
20150106535 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus including a device detection part configured to detect a second execution device that is identical or similar to a first execution device which executes a command, and an execution control part configured to perform control in a manner that the command is executed by the second execution device detected by the device detection part. | 2015-04-16 |
20150106536 | POWER-OVER-ETHERNET POWERED UNIVERSAL SERIAL BUS CHARGING PORT - A power conversion device is configured to convert power-over-Ethernet (PoE) power to universal serial bus (USB) power to yield a USB charging port. The conversion device can conform to a number of modular and/or portable form factors, allowing existing Ethernet data ports to be easily converted to USB charging ports. Embodiments include a modular conversion device configured to mount in a window of an existing wall plate as a replacement for an unused Ethernet data port, and a portable conversion device that can be plugged into an existing Ethernet data port. The conversion device receives PoE power from the Ethernet network, converts the PoE power to an appropriate USB standard, and delivers the power to an integrated USB charging port. | 2015-04-16 |
20150106537 | METHOD OF CONTROLLING DATA COMMUNICATION - A method of controlling data communication in a communications network having a central data server that is provided data through multiple data queues. The data arriving at the central data server may be stored in each of the multiple data queues. The data in the multiple data queues may then be supplied to the central data server based on a predetermined schedule. | 2015-04-16 |
20150106538 | RECEIVER ARCHITECTURE FOR MEMORY READS - A receiver architecture for memory reads is described herein. In one embodiment, a memory interface comprises a plurality of transmitters, wherein each of the plurality of transmitters is configured to transmit data to a memory device over a respective one of a plurality of I/O channels. The memory interface also comprises a plurality of receivers, wherein each of the plurality of receivers is coupled to a respective one of the plurality of transmitters, and is configured to receive data from the memory device over the respective one of the plurality of I/O channels. The plurality of receivers are grouped together into a receiver subsystem that is located away from the plurality of transmitters. | 2015-04-16 |
20150106539 | COMMUNICATION CONTROL PINS IN A DUAL ROW CONNECTOR - Methods and apparatus, including computer program products, are provided for communications control in a dual row connector. In one aspect there is provided a method. The method may include coupling a first data connector including a pair of communication control pins and another pair of communication control pins, wherein the pair further comprises a first communication control pin located at a first row of the first data connector and a second communication control pin located at a second row of the data connector, wherein the other pair further comprises a third communication control pin located at the first row of the first data connector and a fourth communication control pin located at the second row of the first data connector. Related apparatus, systems, methods, and articles are also described. | 2015-04-16 |
20150106540 | DEVICE, METHOD AND COMPUTER PROGRAM FOR OPERATING A DATA BUS SYSTEM OF A MOTOR VEHICLE - An apparatus for operating a data bus system of a motor vehicle having data bus segments, at least one of which is designed to switch from an active state to a rest state and vice versa. In a first step, a communication requirement is detected of a first control device of a first data bus segment in the rest state. If a communication requirement of the first control device is detected, the first data bus segment is brought from the rest state into the active state. If a communication requirement of the first control device with a second control device outside of the first data bus segment is detected, all other data bus segments of the data bus system outside of the first data bus segment that are in the rest state are additionally activated across the board. | 2015-04-16 |
20150106541 | AUTO-CONFIGURATION OF DEVICES BASED UPON CONFIGURATION OF SERIAL INPUT PINS AND SUPPLY - A device includes a memory, at least two input/output (IO) pins, and slave identifier (ID) selection circuitry. The memory stores a slave ID, which identifies the device to other devices in a serial communication process. The slave ID selection circuitry changes the stored slave ID based on which one of the IO pins is coupled to a supply voltage. By changing the slave ID of the device based on which one of the IO pins is coupled to a supply voltage, a number of devices with otherwise identical slave IDs may change their slave IDs in order to participate in a serial communication process on the same bus. Further, the slave ID of the device may be changed without using an additional IO pin on the device. | 2015-04-16 |
20150106542 | LOCK MANAGEMENT SYSTEM, LOCK MANAGEMENT METHOD AND LOCK MANAGEMENT PROGRAM - Provided is a lock management system, a lock management method and a lock management program whereby lock acquisition and release processes can be carried out at high speed. | 2015-04-16 |
20150106543 | System and Method for Processing Device with Differentiated Execution Mode - In accordance with an embodiment of the present invention, a method of operating a system includes operating in a first operating mode to not permit access to an address range and receiving a priority interrupt (PI) signal. The method further includes operating in a second operating mode to permit access to the address range in response to receiving the PI signal. | 2015-04-16 |
20150106544 | CONNECTOR INTERFACE PIN MAPPING - Methods and apparatus, including computer program products, are provided for connector interface mapping. In one aspect there is provided a method. The method may include detecting, at a first device, an orientation of a data connector connectable to a data interface, the data interface having a first portion and a second portion, the first portion coupled to a single port of a first type at the first device; sending, by the first device, the detected orientation information to a second device; and receiving, at the first device including the single port, data sent by the second device to the single port. Related apparatus, systems, methods, and articles are also described. | 2015-04-16 |
20150106545 | Computer Processor Employing Cache Memory Storing Backless Cache Lines - A computer processing system with a hierarchical memory system having at least one cache and physical memory, and a processor having execution logic that generates memory requests that are supplied to the hierarchical memory system. The at least one cache stores a plurality of cache lines including at least one backless cache line. | 2015-04-16 |
20150106546 | ORDERING A PLURALITY OF WRITE COMMANDS ASSOCIATED WITH A STORAGE DEVICE - A system, method, and computer program product are provided for ordering a plurality of write commands associated with a storage device. In operation, a plurality of write commands associated with a storage device to be sent to a device are identified. Additionally, an order of the plurality of write commands is determined, the determined order being known by the device. Further, the plurality of write commands are ordered in the determined order. | 2015-04-16 |
20150106547 | DISTRIBUTED MEMORY SYSTEMS AND METHODS - Apparatuses and methods are disclosed herein, including those that operate to receive memory requests from a processor over a high-speed communication interface and distribute the requests among a plurality of memory storage devices over lower-speed communication interfaces. | 2015-04-16 |
20150106548 | Managed-NAND With Embedded Random-Access Non-Volatile Memory - Systems and methods embed a random-access non-volatile memory array in a managed-NAND system to execute the boot code or other time-sensitive applications. By embedding this random-access non-volatile memory in the managed-NAND system, either on the memory controller chip or as a separate chip within the managed-NAND system package, an application may be read with fast initial access time, alleviating the slow access time limitations of NAND Flash technology. Depending on the size of the application, the system may be configured to read the whole application content or only a time-critical portion from this embedded random-access non-volatile memory array. | 2015-04-16 |
20150106549 | Robust Data Replication - Disclosed is a system for replicating data. The system may comprise a plurality of nodes preferably organised in groups with one of the nodes acting as a coordinator node. The nodes are configured to receive write requests from an external server and to apply these write requests to a data storage source of the data storage system. The write requests typically belong to a batch of independent write actions identified by a batch sequence number. Each node stores the write request in non-volatile memory with the coordinator node monitoring which batches are secured in their entirety in non-volatile memory. The coordinator node authorises all other nodes to sequentially replicate the write requests in their non-volatile memory to the data storage source for all writes up to the highest batch sequence number for which all writes have been secured in non-volatile memory. | 2015-04-16 |
20150106550 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME AND STORAGE MEDIUM - An information processing apparatus determines, when data is written to a semiconductor storage including a plurality of flash memories, whether or not the data to be written is specific data (data associated with the complete erasure) for which it is set that unnecessary data relating to the data is made to be erasable so that the unnecessary data does not remain in the semiconductor storage. In a case where it is determined that the data to be written is not the specific data, the information processing apparatus performs data write processing in a state where an interleave is enabled. Meanwhile, in a case where it is determined that the data to be written is the specific data, the information processing apparatus performs data write processing in a state where the interleave is disabled. | 2015-04-16 |
20150106551 | SEMICONDUCTOR DEVICE AND OPERATING METHOD THEREOF - A semiconductor device remaps the relationship between logical addresses and physical addresses of a semiconductor memory device at each first interval. The semiconductor device may include a wear leveling controller configured to select a first physical address of the semiconductor memory device to remap a logical address corresponding to the first physical address of the semiconductor memory device to a second physical address of the semiconductor memory device, and to adjust the first interval. | 2015-04-16 |
20150106552 | METHOD FOR READING A DATA BLOCK OF A NONVOLATILE MEMORY OF A CONTROL UNIT - A method for reading a data block of a nonvolatile memory of a control unit, the nonvolatile memory being subdivided into sectors; the sectors being written to consecutively in each case from a sector beginning to a sector end with different versions of different data blocks; a current version of a data block being written to a current position in a current sector; in a cache memory, for each data block, an entry being present that characterizes the respective data block. | 2015-04-16 |
20150106553 | SOLID STATE DRIVE CARD AND AN ELECTRONIC SYSTEM INCLUDING THE SAME - Provided are a solid state drive (SSD) card and an electronic system including the same. The electronic system includes a main board to which an input device and an output device are connected. A central processing unit (CPU) and a platform hub (PH) are provided on the main board. The PH is electrically connected to a hybrid interface socket. The hybrid interface socket includes a secure digital (SD) card interface and a non-SD card interface. When the SSD card and the electronic system including the same are used, a storage capacity may be conveniently upgraded to a higher capacity. Also, since the hybrid interface socket is provided in place of a conventional SD card socket, additional space is not required and thus space may be efficiently used. | 2015-04-16 |
20150106554 | Regrouping and Skipping Cycles in Non-Volatile Memory - A non-volatile memory system utilizes multiple programming cycles to write units of data, such as a logical page of data, to a non-volatile memory array. User data is evaluated before writing to determine whether programming can be skipped for bay addresses. The system determines whether programming can be skipped for an initial set of bay groups. If a bay group cannot be skipped, the system determines whether the bay group includes individual bays that may be skipped. Bays are regrouped into new bay groups to reduce the number of BAD cycles during programming. Independent column addressing for multiple bays within a bay group is provided. During a column address cycle, a separate column address is provided to the bays to select different columns for programming within each bay. By simultaneously programming multiple column addresses during a single column address cycle, the system may skip programming for some column address cycles. | 2015-04-16 |
20150106555 | NONVOLATILE SEMICONDUCTOR STORAGE SYSTEM - A nonvolatile semiconductor storage system has multiple nonvolatile semiconductor storage media, a control circuit having a media interface group (one or more interface devices) coupled to the multiple nonvolatile semiconductor storage media, and multiple switches. The media interface group and the multiple switches are coupled via data buses, and each switch and each of two or more nonvolatile chips are coupled via a data bus. The switch is configured so as to switch a coupling between a data bus coupled to the media interface group and a data bus coupled to any of multiple nonvolatile chips that are coupled to this switch. The control circuit partitions write-target data into multiple data elements, switches a coupling by controlling the multiple switches, and distributively sends the multiple data elements to multiple nonvolatile chips. | 2015-04-16 |
20150106556 | Endurance Translation Layer (ETL) and Diversion of Temp Files for Reduced Flash Wear of a Super-Endurance Solid-State Drive - A flash drive has increased endurance and longevity by reducing writes to flash. An Endurance Translation Layer (ETL) is created in a DRAM buffer and provides temporary storage to reduce flash wear. A Smart Storage Switch (SSS) controller assigns data-type bits when categorizing host accesses as paging files used by memory management, temporary files, File Allocation Table (FAT) and File Descriptor Block (FDB) entries, and user data files, using address ranges and file extensions read from FAT. Paging files and temporary files are never written to flash. Partial-page data is packed and sector mapped by sub-sector mapping tables that are pointed to by a unified mapping table that stores the data-type bits and pointers to data or tables in DRAM. Partial sectors are packed together to reduce DRAM usage and flash wear. A spare/swap area in DRAM reduces flash wear. Reference voltages are adjusted when error correction fails. | 2015-04-16 |
20150106557 | Virtual Memory Device (VMD) Application/Driver for Enhanced Flash Endurance - A Virtual-Memory Device (VMD) driver and application execute on a host to increase endurance of flash memory attached to a Super Enhanced Endurance Device (SEED) or Solid-State Drive (SSD). Host accesses to flash are intercepted by the VMD driver using upper and lower-level filter drivers and categorized as data types of paging files, temporary files, meta-data, and user data files, using address ranges and file extensions read from meta-data tables. Paging files and temporary files are optionally written to flash. Full-page and partial-page data are grouped into multi-page meta-pages by data type before storage by the SSD. Ramdisks and caches for storing each data type in the host DRAM are managed and flushed to the SSD by the VMD driver. Write dates are stored for pages or blocks for management functions. A spare/swap area in DRAM reduces flash wear. Reference voltages are adjusted when error correction fails. | 2015-04-16 |
20150106558 | SEMICONDUCTOR DEVICE AND DATA PROCESSING METHOD - A semiconductor device has: as security states to which the nonvolatile memory device can transition, an unprotected state in which, when secret information is not set in the nonvolatile memory device, rewriting the nonvolatile memory device is permitted, and reading the stored information is permitted; a protection unlocked state in which, when the secret information is set in the nonvolatile memory device, rewriting the nonvolatile memory device is permitted on condition that a result of authentication using the secret information is correct, and reading the stored information is permitted; and a protection locked state in which, when the secret information is set in the nonvolatile memory device, rewriting the nonvolatile memory device is inhibited until correctness as a result of authentication using the secret information is confirmed, and reading the stored information is inhibited under a predetermined condition. | 2015-04-16 |
20150106559 | NONVOLATILE STORAGE DEVICE AND OPERATING SYSTEM (OS) IMAGE PROGRAM METHOD THEREOF - A nonvolatile storage device in accordance with the inventive concepts includes a nonvolatile memory device comprising a first memory area and a second memory area, and a memory controller. The memory controller includes a first register configured to store reliable mode information, and a second register configured to store operating system (OS) image information. The memory controller is configured to receive a command from a host based on the reliable mode information; determine whether the command is a write request for an OS image and whether OS image information accompanying the command matches the OS image information stored in the second register; write the OS image to the first memory area if the OS image information accompanying the command matches the OS image information stored in the second register; and block data migration of the OS image from the first memory area to the second memory area. | 2015-04-16 |
20150106560 | METHODS AND SYSTEMS FOR MAPPING A PERIPHERAL FUNCTION ONTO A LEGACY MEMORY INTERFACE - A memory system includes a CPU that communicates commands and addresses to a main-memory module. The module includes a buffer circuit that relays commands and data between the CPU and the main memory. The memory module additionally includes an embedded processor that shares access to main memory in support of peripheral functionality, such as graphics processing, for improved overall system performance. The buffer circuit facilitates the communication of instructions and data between the CPU and the peripheral processor in a manner that minimizes or eliminates the need to modify the CPU, and consequently reduces practical barriers to the adoption of main-memory modules with integrated processing power. | 2015-04-16 |
20150106561 | MEMORY COMPONENT WITH ADJUSTABLE CORE-TO-INTERFACE DATA RATE RATIO - A memory component includes a memory bank comprising a plurality of storage cells and a data interface block configured to transfer data between the memory component and a component external to the memory component. The memory component further includes a plurality of column interface buses coupled between the memory bank and the data interface block, wherein a first column interface bus of the plurality of column interface buses is configured to transfer data between a first storage cell of the plurality of storage cells and the data interface block during a first access operation and wherein a second column interface bus of the plurality of column interface buses is configured to transfer the data between the first storage cell and the data interface block during a second access operation. | 2015-04-16 |
20150106562 | SECURE DATA ERASURE SYSTEM - An erasure system and method for sorting, tracking, and erasing a plurality of data storage devices using enterprise hardware and software designed for data storage. The erasure system may include a server, drive arrays having receptacles for communicably coupling with the data storage devices, and a drive array controller configured for communicably coupling the server with the drive arrays. The server may receive specification information regarding each of the drive arrays and each of the data storage devices in the receptacles of the drive arrays for erasure and logging purposes. Then the server may overwrite each of the data storage devices according to the DoD 5220.22-M standard, thereby erasing the data storage devices. The server may also create log files corresponding to each of the data storage devices, including information like time, date, and if the erasure of the data storage device is complete or has failed. | 2015-04-16 |
20150106563 | EFFICIENT SUPPORT FOR DRIVES WITH DIFFERENT SECTOR ALIGNMENTS IN A RAID LAYOUT - In one embodiment, a method includes receiving an input/output (I/O) request for data that starts or ends at a location other than a physical sector boundary of the device. The method further includes reading, starting at a first physical sector boundary before a beginning location specified in the I/O request and ending at a second physical sector boundary after an ending location specified in the request. | 2015-04-16 |
20150106564 | STORAGE SYSTEM AND METHOD FOR REDUCING ENERGY CONSUMPTION - A system and method that include configuring local disk drives of a local storage system so that at any given point of time, a first part of the local disk drives operate in a low power state and a second part of the local disk drives operate in an active state; and in response to a read request of a data portion on a local disk drive of the local disk drives: determining whether the local disk drive currently operates in the low power state; reading the data portion from the local disk drive, if the local disk drive does not currently operate in the low power state; if the local disk drive currently operates in the low power state, enquiring if a remote mirror disk drive that stores a copy of the data portion currently operates in the low power state; wherein the remote mirror disk drive is comprised in a remote storage system that is coupled to the local storage system; and if the remote mirror disk drive does not currently operate in the low power state, requesting from the remote storage system to read the copy of the data portion from the remote mirror disk drive. | 2015-04-16 |
20150106565 | STORAGE CONTROLLING APPARATUS, INFORMATION PROCESSING APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN STORAGE CONTROLLING PROGRAM - A storage controlling apparatus includes a processor. The processor estimates, when a new virtual machine is to be produced, an access frequency to a new virtual disk to be allocated to the new virtual machine based on an access frequency to an existing virtual disk allocated to an existing virtual machine produced from master information on which the new virtual machine is based, and temporarily reserves, when the estimated access frequency exceeds a first threshold value, a plurality of successive allocation unit regions in a physical disk for the new virtual disk. | 2015-04-16 |
20150106566 | Computer Processor Employing Dedicated Hardware Mechanism Controlling The Initialization And Invalidation Of Cache Lines - A computer processing system includes execution logic that generates memory requests that are supplied to a hierarchical memory system. The computer processing system includes a hardware map storing a number of entries associated with corresponding cache lines, where each given entry of the hardware map indicates whether a corresponding cache line i) currently stores valid data in the hierarchical memory system, or ii) does not currently store valid data in the hierarchical memory system and should be interpreted as being implicitly zero throughout. | 2015-04-16 |
20150106567 | Computer Processor Employing Cache Memory With Per-Byte Valid Bits - A computer processing system with a hierarchical memory system that associates a number of valid bits with each cache line of the hierarchical memory system. The valid bits are provided for each cache line stored in a respective cache and make explicit which bytes are semantically defined and which are not for the associated given cache line. Memory requests to the cache(s) of the hierarchical memory system can include an address specifying a requested cache line as well as a mask that includes a number of bits each corresponding to a different byte of the requested cache line. The values of the bits of the byte mask indicate which bytes of the requested cache line are to be returned from the hierarchical memory system. The memory request is processed by the top level cache of the hierarchical memory system, looking for one or more valid bytes of the requested cache line corresponding to the target address of the memory request. The valid bytes of the cache line corresponding to the byte mask as stored in cache can be identified by reading out the valid bit(s) and data byte(s) stored by the cache for putative matching cache lines for those data bytes that are specified by the byte mask of the memory request, while ignoring the valid bit(s) and data byte(s) stored by the cache for putative matching cache lines for those data bytes that are not specified by the byte mask of the memory request. Extensions to shared multiprocessor systems are also described and claimed. | 2015-04-16 |
20150106568 | MULTI-TIERED CACHING FOR DATA STORAGE MANAGEMENT IN A DEVICE - A data storage device includes one or more storage media that include multiple physical storage locations. The device also includes at least one cache memory having a logical space that includes a plurality of separately managed logical block address (LBA) ranges. Additionally, a controller is included in the device. The controller is configured to receive data extents addressed by a first LBA and a logical block count. The controller is also configured to identify at least one separately managed LBA range of the plurality of separately managed LBA ranges in the at least one cache memory based on LBAs associated with at least some of the received data extents. The controller stores the at least some of the received data extents in substantially monotonically increasing LBA order in at least one physical storage location, of the at least one cache memory, assigned to the identified at least one LBA range. | 2015-04-16 |
20150106569 | CHIP STACK CACHE EXTENSION WITH COHERENCY - By arranging dies in a stack such that failed cores are aligned with adjacent good cores, fast connections between good cores and cache of failed cores can be implemented. Cache can be allocated according to a priority assigned to each good core, by latency between a requesting core and available cache, and/or by load on a core. | 2015-04-16 |
20150106570 | CACHE METHOD AND CACHE APPARATUS - A cache apparatus stores part of a plurality of accessible data blocks into a cache area. A calculation part calculates, for each pair of data blocks of the plurality of data blocks, an expected value of the number of accesses made after one of the data blocks is accessed until the other of the data blocks is accessed, on the basis of a probability that when each of the plurality of data blocks is accessed, each data block that is likely to be accessed next is accessed next. When a data block is read from outside the cache area, a determination part determines a data block to be discarded from the cache area, on the basis of the expected value of the number of accesses made after the read data block is accessed until each of the plurality of data blocks is accessed. | 2015-04-16 |
20150106571 | SYSTEM AND METHOD FOR MANAGING CACHE COHERENCE IN A NETWORK OF PROCESSORS PROVIDED WITH CACHE MEMORIES - A cache coherence management system includes: a set of directories distributed between nodes of a network for interconnecting processors including cache memories, each directory including a correspondence table between cache lines and information fields on the cache lines; and a mechanism updating the directories by adding, modifying, or deleting cache lines in the correspondence tables. In each correspondence table and for each cache line identified, at least one field is provided for indicating a possible blocking of a transaction relative to the cache line considered, when the blocking occurs in the node associated with the correspondence table considered. The system further includes a mechanism detecting fields indicating a transaction blocking and restarting each transaction detected as blocked from the node in which it is indicated as blocked. | 2015-04-16 |
20150106572 | SCRIPTED MULTIPROCESS PROBING WITHOUT SYSTEM PRIVILEGE - A controller process loads a module based on a user-generated script into itself. The controller process also generates a shared memory mapping using offset pointers as opposed to absolute pointers. The controller process loads the module and the shared memory mapping into target processes indicated by the user-generated script in order to probe the target processes. | 2015-04-16 |
20150106573 | DATA PROCESSING SYSTEM - A data processing system includes a host device including a first working memory and a data storage device suitable for responding to an access request from the host device. The data storage device includes a controller suitable for controlling an operation of the data storage device, a second working memory suitable for storing data used for driving of the controller, and an access controller suitable for accessing a shared memory region of the first working memory under the control of the controller. | 2015-04-16 |
20150106574 | Performing Processing Operations for Memory Circuits using a Hierarchical Arrangement of Processing Circuits - The described embodiments include a computing device that comprises at least one memory die having memory circuits and memory die processing circuits, and a logic die coupled to the at least one memory die, the logic die having logic die processing circuits. In the described embodiments, the memory die processing circuits are configured to perform memory die processing operations on data retrieved from or destined for the memory circuits and the logic die processing circuits are configured to perform logic die processing operations on data retrieved from or destined for the memory circuits. | 2015-04-16 |
20150106575 | DATA WRITING DEVICE AND METHOD - A data writing device includes a processor that executes a procedure. The procedure includes: performing first writing that writes data to a storage region of the storage section; and performing second writing that writes command execution data representing an execution state of each command of a program including a plurality of commands to an expected storage region, among the plurality of storage regions of the storage section, where it is expected that the first writing has not been performed. | 2015-04-16 |
20150106576 | DYNAMIC RECORD MANAGEMENT FOR SYSTEMS UTILIZING VIRTUAL STORAGE ACCESS METHOD (VSAM) - In one embodiment, a computer program product for modifying a virtual storage access method (VSAM) data set during open time, the computer program product including a computer readable storage medium having computer readable program code embodied therewith, the embodied computer readable program code including computer readable program code configured to open a VSAM data set, and computer readable program code configured to modify a VSAM control block structure for the VSAM data set while the VSAM data set is open during an open time in which static data set characteristics and/or job parameters have been defined for the VSAM data set, wherein the computer readable program code configured to modify the VSAM control block structure includes computer readable program code configured to interact with the VSAM data set within a VSAM dynamic address space using at least one of: a VSAM console interface and a VSAM programming interface. | 2015-04-16 |
20150106577 | DE-INTERLEAVING ON AN AS-NEEDED BASIS - One embodiment is an apparatus having a memory, a controller, and a de-interleaving module. The memory is configured to store portions of a set of interleaved values, where the set of interleaved values correspond to a single application of an interleaving mapping to a set of un-interleaved values. The controller is configured to retrieve each portion from another memory that stores the set of interleaved values by moving the portion from the other memory to the memory. The de-interleaving module is configured to de-interleave the interleaved values in at least one of the portions to generate a de-interleaved portion such that processing downstream of the de-interleaving module can begin processing the de-interleaved portion before all of the interleaved values in the set of interleaved values are de-interleaved by the de-interleaving module. | 2015-04-16 |
20150106578 | SYSTEMS, METHODS AND DEVICES FOR IMPLEMENTING DATA MANAGEMENT IN A DISTRIBUTED DATA STORAGE SYSTEM - Systems, methods and devices for monitoring data transactions in a data storage system, the data storage system being in network communication with a plurality of storage resources and comprising at least a data analysis module and a logging module, and receiving at the data analysis module at least one data transaction for data in the data storage system, each data transaction having at least one data-related characteristic; storing in the logging module the at least one data-related characteristic and a data transaction identifier that relates the data transaction to the associated at least one data-related characteristic in the logging module; analyzing at the data analysis module at least one data-related characteristic related to a first data transaction to determine if the first data transaction shares at least one data-related characteristic with other data transactions; and, in cases where the first data transaction shares at least one data-related characteristic with at least one other data transaction, logically linking the first data transaction with the other data transactions. | 2015-04-16 |
20150106579 | Forward-Only Paged Data Storage Management - Computer-implemented methods and systems for managing data in one or more data storage media are provided. An example method may comprise creating a data structure within the data storage media. The data structure includes a plurality of memory pages, each page comprising a plurality of sessions, and each session comprising a header and a plurality of data objects. The method also comprises enabling writing data to the data storage medium, in response to routine requests, such that the data is recorded to the one or more data objects nearest the current location of a virtual cursor. When a data management operation is performed, the virtual cursor is moved within a single page in a single direction. | 2015-04-16 |
20150106580 | SYSTEM AND METHOD FOR PERFORMING BACKUP OR RESTORE OPERATIONS UTILIZING DIFFERENCE INFORMATION AND TIMELINE STATE INFORMATION - Systems and methods for backing-up data from a first storage pool to a second storage pool using difference information between time states are disclosed. The system has a data management engine for performing data management functions, including at least a back-up function to create a back-up copy of data. By executing a sequence of snapshot operations to create point-in-time images of application data on a first storage pool, each successive point-in-time image corresponding to a specific, successive time-state of the application data, a series of snapshots is created. The snapshots are then used to create difference information indicating which application data has changed and the content of the changed application data for the corresponding time state. This difference information is then sent to a second storage pool to create a back-up copy of data for the current time-state. | 2015-04-16 |
20150106581 | STORAGE MANAGEMENT DEVICE, INFORMATION PROCESSING SYSTEM, STORAGE MANAGEMENT METHOD, AND RECORDING MEDIUM - A storage management device includes: a memory; and a processor coupled to the memory. The processor executes a process including: managing a plurality of storages in a system in which a data storage destination is switched between a first storage and a second storage; first performing management to cause the first storage to hold data as master data and cause the second storage to hold data equivalent to the master data as backup data; and second performing management to cause the second storage to hold update data for the backup data held in the second storage independently from the backup data and cause a third storage different from the first storage and the second storage to duplicate the update data when the data storage destination is switched from the first storage to the second storage. | 2015-04-16 |
20150106582 | APPARATUS AND METHOD FOR MANAGING DATA IN HYBRID MEMORY - An apparatus and method for managing data in hybrid memory are disclosed. The apparatus for managing data in hybrid memory may include a page access prediction unit, a candidate page classification unit, and a page placement determination unit. The page access prediction unit predicts an access frequency value for each page for a specific period in the future based on an access frequency history generated for the page. The candidate page classification unit classifies the page as a candidate page for migration based on the predicted access frequency value for the page. The page placement determination unit determines a placement option for the classified candidate page. | 2015-04-16 |
20150106583 | STORAGE SPACE MAPPING METHOD AND APPARATUS - Embodiments of the present invention provide a storage space mapping method and apparatus. The method includes: parsing source code, so as to acquire a home file and/or a home folder of each function and/or variable in the source code; acquiring a mapping relationship between the home file and a storage area identifier and/or between the home folder and a storage area identifier, and establishing, according to the mapping relationship, a mapping relationship between each function and/or variable and the storage area identifier; and mapping, according to a mapping relationship between the storage area identifier and storage space, each function and/or variable to the storage space. According to the storage space mapping method and apparatus provided in the embodiments of the present invention, development workload and maintenance costs of storage space mapping can be greatly reduced. | 2015-04-16 |
20150106584 | System and Method for Simultaneously Storing and Reading Data From A Memory System - A system and method for providing high-speed memory operations is disclosed. The technique uses virtualization of memory space to map a virtual address space to a larger physical address space wherein no memory bank conflicts will occur. The larger physical address space is used to prevent memory bank conflicts from occurring by moving the virtualized memory addresses of data being written to memory to a different location in physical memory that will eliminate a memory bank conflict. This allows the memory system to both store and read data in the same cycle with no conflicts. | 2015-04-16 |
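A toy model of the remapping described above, with invented parameters (one spare physical bank, four-entry banks): when a same-cycle write targets the bank a read is using, the write's virtual address is remapped to a free slot in a different bank, so both operations proceed without a bank conflict. This is my reading of the abstract, not the patent's implementation.

```python
NUM_BANKS, BANK_SIZE = 5, 4                   # 4 "real" banks plus 1 spare
phys_of = {v: v for v in range(4 * BANK_SIZE)}                  # virtual -> physical
free_slots = list(range(4 * BANK_SIZE, NUM_BANKS * BANK_SIZE))  # spare bank

def bank(phys_addr):
    return phys_addr // BANK_SIZE

def read_and_write(read_v, write_v):
    """Same-cycle read + write; relocate the write on a bank conflict."""
    read_bank = bank(phys_of[read_v])
    if bank(phys_of[write_v]) == read_bank:
        free_slots.append(phys_of[write_v])   # the old slot becomes free
        # pick any free slot in a bank other than the one being read
        new = next(p for p in free_slots if bank(p) != read_bank)
        free_slots.remove(new)
        phys_of[write_v] = new                # remap the virtual address
    return bank(phys_of[write_v]) != read_bank  # True: conflict-free

ok = read_and_write(read_v=0, write_v=1)      # both initially in bank 0
print(ok, bank(phys_of[1]))                   # True 4
```

The write landed in the spare bank (bank 4); the freed slot in bank 0 can absorb a future conflicting write.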
20150106585 | ADDRESS GENERATION IN A DATA PROCESSING APPARATUS - A data processing apparatus is provided comprising processing circuitry and an instruction decoder responsive to program instructions to control processing circuitry to perform the data processing. The instruction decoder is responsive to an address calculating instruction to perform an address calculating operation for calculating a partial address result from a non-fixed reference address and a partial offset value such that a full address specifying a memory location of an information entity is calculable from said partial address result using at least one supplementary program instruction. The partial offset value has a bit-width greater than or equal to said instruction size and is encoded within at least one partial offset field of said address calculating instruction. A corresponding data processing method, virtual machine and computer program product are also provided. | 2015-04-16 |
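A rough functional model of the two-instruction address generation described above, in the spirit of ARM's ADRP + ADD pairing: the address-calculating instruction combines a non-fixed reference (here the PC) with a wide page-granular partial offset, and a supplementary instruction adds the low-order part. The page size and all names are assumptions for the sketch.

```python
PAGE_BITS = 12   # assumed page granularity for the partial offset

def addr_calc(reference, partial_offset_pages):
    """Partial address result: page-aligned reference plus a wide page offset."""
    page_base = (reference >> PAGE_BITS) << PAGE_BITS
    return page_base + (partial_offset_pages << PAGE_BITS)

def supplementary_add(partial, low_offset):
    """Supplementary instruction completes the full address."""
    return partial + low_offset

pc = 0x00400A10                        # non-fixed reference address
partial = addr_calc(pc, 5)             # 5 pages above the current page
full = supplementary_add(partial, 0x123)
print(hex(full))  # 0x405123
```

Splitting the offset this way lets a fixed-size instruction reach a far larger range than a single immediate field could encode.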
20150106586 | METHOD AND A DEVICE FOR CONTROLLING MEMORY-USAGE OF A FUNCTIONAL COMPONENT - The invention relates to controlling memory-usage of a functional component, e.g. a network interface of a router or a switch. A portion of a virtual memory organized to comprise virtual memory pages is reserved (…). | 2015-04-16 |
20150106587 | DATA REMAPPING FOR HETEROGENEOUS PROCESSOR - A processor remaps stored data and the corresponding memory addresses of the data for different processing units of a heterogeneous processor. The processor includes a data remap engine that changes the format of the data (that is, how the data is physically arranged in segments of memory) in response to a transfer of the data from system memory to a local memory hierarchy of an accelerated processing module (APM) of the processor. The APM's local memory hierarchy includes an address remap engine that remaps the memory addresses of the data at the local memory hierarchy so that the data can be accessed by routines at the APM that are unaware of the data remapping. By remapping the data, and the corresponding memory addresses, the APM can perform operations on the data more efficiently. | 2015-04-16 |
20150106588 | Computer Processor Employing Hardware-Based Pointer Processing - A computer processor is provided with execution logic that performs operations that utilize pointers stored in memory. In one aspect, each pointer is associated with a predefined number of event bits. The execution logic processes the event bits of a given pointer in conjunction with processing a predefined pointer-related operation involving the given pointer in order to selectively output an event-of-interest signal. | 2015-04-16 |
20150106589 | SMALL FORM HIGH PERFORMANCE COMPUTING MINI HPC - A computing platform comprising a small form factor high performance computer for mobile high performance computing is provided. The computing platform comprises using small form factor design with a 64-core microprocessor/co-processor is provided. The small form factor high performance computer may include 64-core microprocessor/co-processors based on the ANNI Stem Cell HPC multicore datacenter chipset cluster of REMTEC. | 2015-04-16 |
20150106590 | FILTERING OUT REDUNDANT SOFTWARE PREFETCH INSTRUCTIONS - The disclosed embodiments relate to a system that selectively filters out redundant software prefetch instructions during execution of a program on a processor. During execution of the program, the system collects information associated with hit rates for individual software prefetch instructions as the individual software prefetch instructions are executed, wherein a software prefetch instruction is redundant if the software prefetch instruction accesses a cache line that has already been fetched from memory. As software prefetch instructions are encountered during execution of the program, the system selectively filters out individual software prefetch instructions that are likely to be redundant based on the collected information, so that likely redundant software prefetch instructions are not executed by the processor. | 2015-04-16 |
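The abstract above describes collecting per-instruction redundancy information and suppressing prefetches that are usually redundant. A small sketch of one way that bookkeeping could look; the threshold, minimum sample count, and class shape are my assumptions, not the patent's.

```python
from collections import defaultdict

class PrefetchFilter:
    """Per-instruction redundancy tracking for software prefetches."""

    def __init__(self, threshold=0.75, min_samples=4):
        self.redundant = defaultdict(int)    # per-PC redundant executions
        self.total = defaultdict(int)        # per-PC total executions
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, pc, line_already_cached):
        """Collect hit information as each prefetch executes."""
        self.total[pc] += 1
        if line_already_cached:              # line was already fetched: redundant
            self.redundant[pc] += 1

    def should_filter(self, pc):
        """Suppress prefetches that have mostly been redundant."""
        if self.total[pc] < self.min_samples:
            return False                     # not enough history yet
        return self.redundant[pc] / self.total[pc] >= self.threshold

f = PrefetchFilter()
for cached in (True, True, True, False):     # 3 of 4 hit an already-cached line
    f.record(pc=0x40, line_already_cached=cached)
print(f.should_filter(0x40))                 # True
```

The minimum-sample guard keeps a cold-start prefetch from being filtered before any evidence accumulates.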
20150106591 | INSTRUCTION AND LOGIC FOR PROCESSING TEXT STRINGS - Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively. | 2015-04-16 |
20150106592 | INSTRUCTION AND LOGIC FOR PROCESSING TEXT STRINGS - Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively. | 2015-04-16 |
20150106593 | INSTRUCTION AND LOGIC FOR PROCESSING TEXT STRINGS - Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively. | 2015-04-16 |
20150106594 | INSTRUCTION AND LOGIC FOR PROCESSING TEXT STRINGS - Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively. | 2015-04-16 |
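The four applications above (20150106591 through 20150106594) share a single abstract describing an instruction that compares each data element of two text-string operands. A rough functional model of one such comparison, in the spirit of an SSE4.2-style "equal any" aggregation; this is an illustrative assumption, not the patents' definition of the instruction.

```python
def string_compare_mask(op1, op2):
    """Per-element comparison: does each element of op1 match any of op2?"""
    return [int(any(a == b for b in op2)) for a in op1]

# One mask bit per element of the first operand.
print(string_compare_mask("hello", "lo"))  # [0, 0, 1, 1, 1]
```

Hardware would produce this whole mask in a single instruction, which is why such compares accelerate tasks like character-class scanning.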
20150106595 | PRIORITIZING INSTRUCTIONS BASED ON TYPE - Methods and reservation stations for selecting instructions to issue to a functional unit of an out-of-order processor. The method includes classifying each instruction into one of a number of categories based on the type of instruction. Once classified, an instruction is stored in an instruction queue corresponding to the category in which it was classified. Instructions are then selected from one or more of the instruction queues to issue to the functional unit based on a relative priority of the plurality of types of instructions. This allows certain types of instructions (e.g. control transfer instructions, flag setting instructions and/or address generation instructions) to be prioritized over other types of instructions even if they are younger. | 2015-04-16 |
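A minimal behavioral sketch of the typed-queue selection described above: instructions land in a queue per category, and issue picks from the highest-priority non-empty queue, so a younger control-transfer instruction can beat an older ALU instruction. The category names and their ordering are assumptions for the sketch.

```python
from collections import deque

PRIORITY = ["control_transfer", "address_gen", "alu"]     # high to low (assumed)

class ReservationStation:
    def __init__(self):
        self.queues = {cat: deque() for cat in PRIORITY}  # one queue per type

    def dispatch(self, instr, category):
        self.queues[category].append(instr)

    def issue(self):
        """Issue from the highest-priority non-empty category queue."""
        for cat in PRIORITY:
            if self.queues[cat]:
                return self.queues[cat].popleft()
        return None

rs = ReservationStation()
rs.dispatch("add r1,r2,r3", "alu")                # older instruction
rs.dispatch("beq r1,done", "control_transfer")    # younger, but prioritized
print(rs.issue())  # beq r1,done
```

Resolving the branch first pays off because it can redirect fetch sooner than age-ordered issue would.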
20150106596 | Data Processing System Having Integrated Pipelined Array Data Processor - A data processing system having a data processing core, an integrated pipelined array data processor, and a buffer for storing a list of algorithms for processing by the pipelined array data processor. | 2015-04-16 |
20150106597 | Computer Processor With Deferred Operations - A computer processor and corresponding method of operation employs execution logic that includes at least one functional unit and operand storage that stores data that is produced and consumed by the at least one functional unit. The at least one functional unit is configured to execute a deferred operation whose execution produces result data. The execution logic further includes a retire station that is configured to store and retire the result data of the deferred operation in order to store such result data in the operand storage, wherein the retire of such result data occurs at a machine cycle following issue of the deferred operation as controlled by statically-assigned parameter data included in the encoding of the deferred operation. | 2015-04-16 |
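A toy model of the retire-station behavior the abstract above describes: the operation's encoding statically names a cycle delay, and the retire station writes the result into operand storage only when that cycle arrives. The class shape and names are my assumptions, not the patent's.

```python
import heapq

class RetireStation:
    """Holds deferred results until their statically encoded retire cycle."""

    def __init__(self):
        self.pending = []         # min-heap of (retire_cycle, name, value)
        self.operands = {}        # the operand storage

    def issue_deferred(self, cycle, delay, name, compute):
        """'delay' comes from the operation's encoding, fixing retirement."""
        heapq.heappush(self.pending, (cycle + delay, name, compute()))

    def tick(self, cycle):
        """Retire every result whose cycle has arrived."""
        while self.pending and self.pending[0][0] <= cycle:
            _, name, value = heapq.heappop(self.pending)
            self.operands[name] = value

rs = RetireStation()
rs.issue_deferred(cycle=0, delay=3, name="r7", compute=lambda: 6 * 7)
rs.tick(2)
print("r7" in rs.operands)   # False: cycle 2 is before the retire cycle
rs.tick(3)
print(rs.operands["r7"])     # 42
```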
20150106598 | Computer Processor Employing Efficient Bypass Network For Result Operand Routing - A computer processor is provided with a plurality of functional units that perform operations specified by at least one instruction over multiple machine cycles, wherein the operations produce result operands. The processor also includes circuitry that generates result tags dynamically according to the number of operations that produce result operands in a given machine cycle. A bypass network is configured to provide data paths for transfer of operand data between the plurality of functional units according to the result tags. | 2015-04-16 |
20150106599 | EXECUTION OF A PERFORM FRAME MANAGEMENT FUNCTION INSTRUCTION - Optimizations are provided for frame management operations, including a clear operation and/or a set storage key operation, requested by pageable guests. The operations are performed, absent host intervention, on frames not resident in host memory. The operations may be specified in an instruction issued by the pageable guests. | 2015-04-16 |
20150106600 | EXECUTION OF CONDITION-BASED INSTRUCTIONS - Execution of condition-based instructions is facilitated. A condition-based instruction is obtained, as well as a confidence level associated with the instruction. The confidence level is checked, and based on the confidence level being a first value, a predicted operation of the instruction, which is based on a predictor, is unconditionally performed. Further, based on the confidence level being a second value, a specified operation of the instruction, which is based on a determined condition, is conditionally performed. | 2015-04-16 |
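A small behavioral sketch of the confidence check described above: at one confidence value the predicted operation is performed unconditionally, at the other the specified operation waits on the actual condition. The two-valued confidence and all names below are invented for illustration.

```python
HIGH, LOW = "high", "low"    # assumed two-valued confidence level

def execute(confidence, predicted_op, specified_op, condition):
    """Condition-based instruction: act on the predictor or on the condition."""
    if confidence == HIGH:
        return predicted_op()                      # unconditional, predicted
    return specified_op() if condition else None   # conditional on the outcome

print(execute(HIGH, lambda: "taken", lambda: "fallthrough", condition=False))  # taken
```

With high confidence the condition is never consulted, which is the latency win the abstract points at.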