46th week of 2013 patent application highlights part 67 |
Patent application number | Title | Published |
20130304913 | Programmable Presence Proxy for Determining a Presence Status of a User - A method and apparatus are provided that evaluate a number of different sources of presence information to determine a presence status of a user. The presence status of a user is determined by obtaining presence information from a plurality of presence data stores; translating the obtained presence information from at least one of the presence data stores into a standard format; and determining the presence status of the user based on the obtained presence information. Presence information can also be based on user-specified rules. Presence information is obtained from a number of presence data stores and the presence status of a user is determined based on one or more rules that are applied to the obtained presence information. The rules may include, for example, aggregation rules that determine the presence status based on one or more items of the obtained presence information, or filter rules that determine who may receive the presence status. | 2013-11-14 |
20130304914 | Topology Aware Content Delivery Network - A method of assigning a server to a client system includes determining an ingress point of the client system and identifying possible egress points for the client system. The method further includes selecting the server from a plurality of servers to reduce network cost and delivery time. | 2013-11-14 |
20130304915 | NETWORK SYSTEM, CONTROLLER, SWITCH AND TRAFFIC MONITORING METHOD - Fine-grained traffic monitoring is achieved in a network in which sFlow and OpenFlow are combined. Specifically, flow identifiers (flow IDs) for identifying flows (or groups of packets) are prepared, and the flow identifiers are stored in entries of flow tables to allow sFlow agents to specify the flow identifiers as data sources. More specifically, the flow identifiers are stored in cookies of entries registered in the flow tables of switches from a controller, and operations are performed for received packets matching the rules of the entries in accordance with the actions defined in the entries. In the switches, the flow identifiers specified as data sources in the MIBs used in sFlow are obtained, and statistical information on packets matching the entries is obtained on the basis of the flow identifiers. | 2013-11-14 |
20130304916 | APPARATUS AND METHOD FOR TRANSMITTING LIVE MEDIA CONTENT - The present invention is directed to a method and an apparatus for sending live streams to regular HTTP clients. An incoming live media stream is segmented into segment files. A segment list is used to maintain the logical representation of the segment files so that they look like one continuous file. Each segment file is sent to the client through the regular HTTP protocol once it is available. Old segment files can be deleted to save storage space and reduce management overhead. | 2013-11-14 |
20130304917 | METHOD AND APPARATUS FOR SUPPORTING ACCESS CONTROL LISTS IN A MULTI-TENANT ENVIRONMENT - In one embodiment, a method includes identifying common access control list (ACL) parameters and variable ACL parameters among a plurality of tenants in a network, mapping parameter values for the variable ACL parameters to the tenants, generating a multi-tenant access control list for the tenants, storing the multi-tenant access control list and mapping at a network device, and applying the multi-tenant access control list to ports at the network device. The multi-tenant access control list includes the common ACL parameters and variable ACL parameters. | 2013-11-14 |
20130304918 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR RESPONSE CRITERIA - Method, apparatus, and computer program product embodiments of the invention are disclosed for response criteria employable, for example, in connection with device discovery within wireless networks. In an example embodiment of the invention, a method comprises: receiving, at a device, a request, wherein said request conveys one or more registered unique identifiers, wherein the registered unique identifiers indicate device capabilities, and wherein said request conveys response criteria referencing the registered unique identifiers; determining, at the device, recognition of one or more of the referenced registered unique identifiers; determining, at the device, possession of device capabilities indicated by the recognized registered unique identifiers; and determining to dispatch, from the device, response to the request, wherein the dispatch is contingent upon the recognition and the possession. | 2013-11-14 |
20130304919 | METHOD AND APPARATUS FOR NOTIFYING REMOTE USER INTERFACE CLIENT ABOUT EVENT OF REMOTE USER INTERFACE SERVER IN HOME NETWORK - An event notifying method includes determining whether a current home network, which is currently connected to a remote user interface server (RUIS) in a home network, is a user's home network selected by a user so as to be allowed to be notified of the event, selectively providing an event page to a remote user interface client (RUIC) selected by a user in the user's home network, and performing user authentication prior to providing the event page, thereby ensuring security of the user's private information. | 2013-11-14 |
20130304920 | Controlling Access to Managed Objects in Networked Devices - Controlling access to managed objects associated with a networked device. A method comprises receiving a request from a principal for access to a managed object associated with the networked device. The managed objects are accessible based on membership in access groups that are compliant with a Simple Network Management Protocol (SNMP). A first and a second of the access groups associated with the principal are determined. Access privileges for the principal are determined, based on the first and the second access groups. Access to the managed object is granted if permitted based on the access privileges for the principal. | 2013-11-14 |
20130304921 | PCRF TRIGGERED RULES CLEAN-UP - Various embodiments relate to a system and related method of handling a plurality of user messages originating from a user device in a communications network. Various embodiments relate to a Policy Charging and Rules Node (PCRN) receiving an initial message from a first device, while anticipating a complementary message from a second device. Upon receipt of the complementary message, the PCRN may pair the messages and generate a rule from the paired message. If the PCRN does not receive the complementary message, the PCRN may generate the rule from only the initially-received message or may ignore the message. The PCRN may treat each received message independently, so that lack of receipt of a complementary message does not affect the creation of rules from another paired message. | 2013-11-14 |
20130304922 | SYSTEMS AND METHODS FOR CREATING VIRTUAL UNIVERSAL PLUG-AND-PLAY SYSTEMS - Methods and devices enable a device located on a source network to appear as a virtual device on a target network. Agent applications running on computers on the source and target networks communicate over a peer-to-peer network enabled by a super-peer networking server on the Internet. To share a device, the target network agent requests the source network agent to provide access to a device in the source network. The source network agent sends the device name, properties, and service template information to the target network agent. The target network agent uses the received information to announce itself as the device to the target network. Devices on the target network may request device services from the target network agent. Such requests are repackaged by the target network agent and sent to the source network agent. The source network agent redirects the service request to the actual device. | 2013-11-14 |
20130304923 | ALLOCATION AND RESERVATION OF VIRTUALIZATION-BASED RESOURCES - According to one aspect of the present disclosure, a method and technique for allocating and reserving virtualization-based resources is disclosed. The method includes: receiving, by a virtualization-based resource management system, a reservation request to reserve a set of computing resources; dynamically allocating the set of computing resources to the reservation request; assigning a key to the allocated set of computing resources; and maintaining the allocated set of computing resources in a reserved state until a utilization request is received to utilize the allocated set of computing resources, the utilization request including the key. | 2013-11-14 |
20130304924 | System and Method for Predicting Meeting Subjects, Logistics, and Resources - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for predicting the subject, logistics, and resources associated with a communication event. Predictions and suggestions can occur prior to, during, or in response to communication events. The user can confirm the prediction or suggestion via user input such as a click or a voice command. The system can analyze past behavior patterns with respect to the subject, logistics, and resources of communication events, followed by preparing ranked listings of which subjects, logistics, and resources are most likely to be used in a given situation. The predicted logistics may then include people to invite, the time and date of the meeting, its duration, location, and anything else useful in helping potential participants gather together. The resources may include files attached, files used, communication event minutes, recordings made, Internet browsers, and other programs which may be utilized by the user. | 2013-11-14 |
20130304925 | CLOUD DEPLOYMENT ANALYSIS FEATURING RELATIVE CLOUD RESOURCE IMPORTANCE - Cloud resource provisioning is described. A cloud resource provisioning method may include obtaining cloud resource usage data by a process, wherein the cloud resource usage data identifies a cloud resource consumed by the process and a usage level associated with the cloud resource. The method may also include assigning an importance indicator to the cloud resource, and identifying a recommended cloud resource having available capacity with respect to the usage level in view of the importance indicator. | 2013-11-14 |
20130304926 | CONCURRENT LINKED-LIST TRAVERSAL FOR REAL-TIME HASH PROCESSING IN MULTI-CORE, MULTI-THREAD NETWORK PROCESSORS - Described embodiments process hash operation requests of a network processor. A hash processor determines a job identifier, a corresponding hash table, and a setting of a traversal indicator for a received hash operation request that includes a desired key. The hash processor concurrently generates a read request for a first bucket of the hash table, and provides the job identifier, the key and the traversal indicator to a read return processor. The read return processor stores the key and traversal indicator in a job memory and stores, in a return memory, entries of the first bucket of the hash table. If a stored entry matches the desired key, the read return processor determines, based on the traversal indicator, whether to read a next bucket of the hash table and provides the job identifier, the matching key, and the address of the bucket containing the matching key to the hash processor. | 2013-11-14 |
20130304927 | NETWORK ADDRESS TRANSLATION-BASED METHOD OF BYPASSING INTERNET ACCESS DENIAL - The network address translation (NAT)-based method of bypassing Internet access denial uses NAT as an identity-hiding technique to bypass Internet access denial. The victim network uses NAT routers as a gateway to connect to neighboring networks, and uses a set of non-blocked Internet protocol (IP) addresses as the NAT routers' external public IP addresses. These addresses are not part of the IP ranges registered to the victim network. Rather, they are obtained from a neighboring network. The outgoing packets, therefore, will not be blocked by the malicious ISP, as they will not be recognized as part of the victim network. The method is scalable and has minimal network performance impact. Although NAT introduces some connectivity limitations, these are overcome by using application-layer routing for server reachability behind NAT, and NAT traversal techniques for peer-to-peer (P2P) applications. | 2013-11-14 |
20130304928 | SYSTEM AND METHOD FOR MANAGING LATENCY IN A DISTRIBUTED TELEPHONY NETWORK - A system and method of preferred embodiments include, at a signaling gateway of a first region, receiving a communication invitation of a first endpoint from a communication provider; signaling the communication invitation to a communication-processing server in a second region; in response to communication processing of the communication-processing server, dynamically directing signaling and media of the communication according to processing instructions and resources available in at least the first and second regions; wherein dynamically directing signaling and media communication of the communication comprises selectively routing media communication exclusively through communication resources of the first region if resources are available in the first region, or selectively routing media communication between the first endpoint, the gateway, and at least the communication-processing server if media resources are not available in the first region. | 2013-11-14 |
20130304929 | SYSTEM AND METHOD FOR MANAGING LATENCY IN A DISTRIBUTED TELEPHONY NETWORK - A system and method of preferred embodiments include, at a signaling gateway of a first region, receiving a communication invitation of a first endpoint from a communication provider; signaling the communication invitation to a communication-processing server in a second region; in response to communication processing of the communication-processing server, dynamically directing signaling and media of the communication according to processing instructions and resources available in at least the first and second regions; wherein dynamically directing signaling and media communication of the communication comprises selectively routing media communication exclusively through communication resources of the first region if resources are available in the first region, or selectively routing media communication between the first endpoint, the gateway, and at least the communication-processing server if media resources are not available in the first region. | 2013-11-14 |
20130304930 | FORKING INTERWORKING - Communication systems, such as the long term evolution (LTE) of the third generation partnership project (3GPP), may interoperate with other communication systems. Interoperating communication systems may benefit from forking interworking in cases where, for example, one network supports session initiation protocol forking and one network does not. A method may include receiving a session initiation protocol invite request containing a session description protocol offer from a network not supporting forking. The method may also include receiving, in different early dialogues, a plurality of provisional session initiation protocol responses containing session description protocol answers. The method may further include storing information from the answers together with information about a related session information protocol dialogue. | 2013-11-14 |
20130304931 | SEAMLESS HOST MIGRATION BASED ON NAT TYPE - Systems and methods of the present invention for maintaining network data distribution are provided. Network data may be distributed in such a manner as to allow a network session to weather interrupted communications between host and clients without significant loss of data. Embodiments of the present invention provide for one or more clients to serve as backup host(s) for the network session, such determinations including the use of NAT profile information. When the other clients transmit data to the host, they may also transmit the data to one or more backup hosts if there are any indications of interrupted communication. | 2013-11-14 |
20130304932 | DATA COMMUNICATION PROTOCOL - Described is a data communication protocol, in which a client and server negotiate in a manner that does not require the client to retry negotiation when servers are not capable of the client-desired protocol. In one example implementation, the desired protocol is SMB 2.0 or greater. The protocol describes a create command with possibly additional context data attached for built-in extensibility, and a compound command comprising a plurality of related commands or unrelated commands. A multi-channel command requests data transfer on a separate data channel, a signed capability verification may be used to ensure that a secure connection is established, and the protocol provides the ability to transfer extended error data from the server in response to a request. | 2013-11-14 |
20130304933 | MULTI-NETWORK ENVIRONMENT ADAPTIVE MEDIA STREAMING TRANSMISSION METHOD AND APPARATUS - A multi-network environment adaptive media streaming method and apparatus. The method of transmitting media streaming includes encoding content to generate media data consisting of a plurality of layers; separating the generated media data into layers; and transmitting the media data separated into layers to a media receiving apparatus in a streaming format over a plurality of networks. | 2013-11-14 |
20130304934 | METHODS AND SYSTEMS FOR CONTROLLING QUALITY OF A MEDIA SESSION - Methods and systems for controlling quality of a media stream in a media session. The described methods and systems control the quality of the media stream by controlling transcoding of the media session. The transcoding is controlled at the commencement of the media session and dynamically during the life of the media session. The transcoding is controlled by selecting a target quality of experience (QoE) for the media session, computing a predicted QoE for each of a plurality of control points, where each control point has a plurality of transcoding parameters associated therewith, selecting a control point of the plurality of control points, wherein the predicted QoE for the selected control point substantially corresponds with the target QoE, and signaling the transcoder to use the selected control point for the media session. | 2013-11-14 |
20130304935 | Providing Sequence Data Sets for Streaming Video Data - A device may encapsulate video data such that Supplemental Enhancement Information (SEI) messages are stored separately from a sequence of coded video pictures described by the SEI messages. An example device includes a control unit configured to generate one or more SEI messages separate from the coded video pictures, wherein the SEI messages describe respective ones of the sequence of coded video pictures and include elements common to more than one of the coded video pictures, and an output interface configured to output the SEI messages separately from the sequence of coded video pictures. An example destination device may receive the SEI messages separately from the coded video pictures and render the coded video pictures using the SEI messages. | 2013-11-14 |
20130304936 | Managing Information Exchange Between Business Entities - Techniques for managing information exchange between business entities include identifying a plurality of routing rules stored in a database of a first business entity computing system; receiving a request for a business transaction through an application of a plurality of applications of the first business entity computing system; determining, based on the identified routing rules, an identifiable business context reference (IBCR) associated with a second business entity computing system, the IBCR comprising a unique identifier associated with the second business entity and a first plurality of business data attributes associated with the second business entity; determining, based on the identified IBCR, a communication connection associated with the IBC and an identifiable business context (IBC) associated with the first business entity computing system; and initiating the business transaction between the first business entity computing system and the second business entity computing system through the determined communication connection. | 2013-11-14 |
20130304937 | INFORMATION CENTRIC NETWORK SYSTEM INCLUDING NETWORK PATH COMPUTATION DEVICE, CONTENT REQUEST NODE, AND RELAY NODE AND METHOD OF COMPUTING NETWORK PATH USING INFORMATION CENTRIC NETWORK - An information centric network system and a method of computing a network path using the information centric network system. The network path computation device includes: a network path representation unit configured to represent a network path using a BF; and a network path computation unit configured to, in response to a request from a content request node, compute the network path from the content request node to a content provider node, and, in response to the network path representation unit representing the network path using the BF, transmit the BF to the content request node. | 2013-11-14 |
20130304938 | MINIMIZING INTERFERENCE IN LOW LATENCY AND HIGH BANDWIDTH COMMUNICATIONS - A central coordinator can execute operations to minimize in-network contention and external network interference in a communication network. The central coordinator can determine to switch to alternate communication channel if performance of the alternate communication channel surpasses the performance of a current communication channel. A multicast channel switch message is transmitted to a plurality of client devices associated with the central coordinator. If an acknowledgement for the multicast channel switch message is not received from a first client device, the central coordinator causes remainder of the plurality of client devices to defer switching to the alternate communication channel and transmits a unicast channel switch message to the first client device. The central coordinator and the associated client devices switch to alternate communication channel after an acknowledgement is received from all the client devices. | 2013-11-14 |
20130304939 | Method and System for Integrated Circuit Card Device With Reprogrammability - An integrated circuit (IC) card interface device with multiple modes of operation allows communications with numerous IC cards, including smart cards. An interface device according to the present invention can be used several different ways, including: connected to a host device (such as a personal computer); in a standalone configuration; and as a flexible platform upon which future applications can be based, since it can be easily reprogrammed and upgraded. Programming mode enables the host device or the smart card itself to update or upgrade the programs available within the interface device. When being updated or upgraded, the source of the programming can be from a host device or from the smart card, adding further flexibility to the use of such an interface device. | 2013-11-14 |
20130304940 | PROVIDING INDIRECT DATA ADDRESSING FOR A CONTROL BLOCK AT A CHANNEL SUBSYSTEM OF AN I/O PROCESSING SYSTEM - A computer program product, apparatus, and method for facilitating input/output (I/O) processing for an I/O operation at a host computer system configured for communication with a control unit. The computer program product is provided for performing a method including: obtaining a transport command word (TCW) for an I/O operation, the TCW specifying a location address and indicating whether the TCW directly or indirectly addresses a message for transmitting one or more commands to the control unit; extracting the specified location address from the TCW; obtaining the message from the specified location address based on the TCW indicating direct addressing, the message including one or more I/O commands; gathering one or more I/O commands from command locations specified by a list of addresses identified by the specified location address to form the message based on the TCW indicating indirect addressing; and forwarding the message to the control unit for execution. | 2013-11-14 |
20130304941 | Accessory Device Architecture - An accessory device architecture is described. In one or more implementations, data is received from an accessory device at an intermediate processor of a computing device, the data usable to enumerate functionality of the accessory device for operation as part of a computing device that includes the intermediate processor. The data is passed by the intermediate processor to an operating system executed on a processor of the computing device to enumerate the functionality of the accessory device as part of the intermediate processor. | 2013-11-14 |
20130304942 | MULTI-MODE ADAPTER - An adapter can be used to connect a portable electronic device to an accessory in instances where the portable electronic device and the accessory have incompatible connectors. The adapter provides two connectors, one compatible with the portable electronic device and the other compatible with the accessory. The adapter has several modes of operation. The portable electronic device selects the appropriate mode of operation for the adapter once it receives information about the accessory connected to the adapter. The portable electronic device instructs the adapter to switch to the selected mode and in response the adapter configures its internal circuitry to enable the selected mode. The portable electronic device can then communicate with the accessory via the adapter. The presence of the adapter can be transparent to the accessory. | 2013-11-14 |
20130304943 | METHOD FOR BROADCAST PROCESSING WITH REDUCED REDUNDANCY - A method for broadcast forwarding in a SAS topology having a zoned portion of a service delivery system (ZPSDS) is disclosed. The ZPSDS includes at least a first zoning expander and a second zoning expander. The method includes originating a broadcast primitive on the first zoning expander; forwarding solely the broadcast primitive to the second zoning expander from the first zoning expander; initiating a discovery process from the second zoning expander upon receiving the broadcast primitive; and generating a source zone group list upon completion of the discovery process. | 2013-11-14 |
20130304944 | Device Enumeration Support - Device enumeration support techniques are described for busses that do not natively support enumeration. In one or more embodiments, an intermediate controller of a computing device is configured to interconnect and manage various hardware devices associated with the computing device. The intermediate controller may detect connection and disconnection of hardware devices in association with one or more communication busses employed by the computing device. In response to such detection, the intermediate controller may send appropriate notifications to an operating system to alert the operating system when hardware devices come and go. This enables the operating system to enumerate and denumerate hardware devices within a device configuration and power management system implemented by the operating system that facilitates interaction with the hardware devices through corresponding representations. | 2013-11-14 |
20130304945 | PERSONAL AREA NETWORK APPARATUS - A device comprises circuitry and a transceiver in communication with the circuitry. In operation, the device is configured to cause the transceiver to: periodically send a broadcast message to indicate the availability of the device for attachment to another device; receive, from the another device, a first pre-attachment message that is sent utilizing first information sent by the device; send, to the another device, a first response that is sent in response to the first pre-attachment message and includes second information; receive, from the another device, a second pre-attachment message that is sent utilizing the second information; send, to the another device, a second response that is sent in response to the second pre-attachment message; and communicate, with the another device, data utilizing a second one of the addresses for identification in association with the another device, for data transfer in connection with a group controlled by the device. | 2013-11-14 |
20130304946 | AUTOMATIC ATTACHMENT AND DETACHMENT FOR HUB AND PERIPHERAL DEVICES - A device comprises circuitry and a transceiver in communication with the circuitry. In operation, the device is configured to cause the transceiver to: periodically send a broadcast message to indicate the availability of the device for attachment to another device; receive, from the another device, a first pre-attachment message that is sent utilizing first information sent by the device; send, to the another device, a first response that is sent in response to the first pre-attachment message and includes second information; receive, from the another device, a second pre-attachment message that is sent utilizing the second information; send, to the another device, a second response that is sent in response to the second pre-attachment message; and communicate, with the another device, a data signal utilizing a second one of the addresses for identification in association with the another device, for data transfer in connection with a group controlled by the device. | 2013-11-14 |
20130304947 | SERIAL COMMUNICATION DEVICE, SERIAL COMMUNICATION SYSTEM, AND SERIAL COMMUNICATION METHOD - A serial communication device includes: a data transfer unit configured to repeat storing a predetermined unit of data, received by a receiving unit, in a receiving buffer and transfer data to a storage unit when data of a predetermined size is accumulated in the receiving buffer; a counting unit configured to count one of the number of times the predetermined unit of data is stored and an amount of data accumulated; a monitoring unit configured to monitor a count value counted by the counting unit; and a data identifying unit configured to determine that a current interval is a non-communication interval during which a sending source does not send data if the count value remains unchanged for a predetermined time and identify first data, received after the determination of the non-communication interval, as beginning data of a sequence of data including a plurality of pieces of data. | 2013-11-14 |
20130304948 | Managing A Direct Memory Access ('DMA') Injection First-In-First-Out ('FIFO') Messaging Queue In A Parallel Computer - Managing a direct memory access (‘DMA’) injection first-in-first-out (‘FIFO’) messaging queue in a parallel computer, including: inserting, by a messaging unit management module, a DMA message descriptor into the injection FIFO messaging queue; determining, by the messaging unit management module, the number of extra slots in an immediate messaging queue required to store DMA message data associated with the DMA message descriptor; and responsive to determining that the number of extra slots in the immediate messaging queue required to store the DMA message data is greater than one, inserting, by the messaging unit management module, a number of DMA dummy message descriptors into the injection FIFO messaging queue, wherein the number of DMA dummy message descriptors is at least as many as the number of extra slots in the immediate messaging queue that are required to store the DMA message data. | 2013-11-14 |
20130304949 | COMPUTER AND INPUT/OUTPUT CONTROL METHOD OF COMPUTER - An HBA driver manages a queue number for enqueuing and dequeuing data to an I/O queue by the main storage, and HBA-F/W manages a storage region inside the HBA. The HBA driver reduces the number of accesses over the PCIe bus by notifying HBA-F/W of an enqueued or dequeued queue number of an I/O queue, utilizing an MMIO area of the main storage to which a storage region on the HBA is mapped. | 2013-11-14 |
20130304950 | CONFIGURABLE HEALTH-CARE EQUIPMENT APPARATUS - An apparatus, system and method for providing health-care equipment in a plurality of customizable configurations. A configuration includes a selection and arrangement of health-care equipment modules that each provide specialized support for the provision of health care, including the measurement of physiological parameters. Various types of configurations include those adapted to be mounted upon a desk top or a wall surface, or adapted for wheel mounting or hand-carriable mobile configurations. | 2013-11-14 |
20130304951 | METHODS AND STRUCTURE FOR IMPROVED AND DYNAMIC ZONING IN SERIAL ATTACHED SCSI EXPANDERS - Methods and structure for dynamically modifying SAS Zoning Features of a SAS expander based on the present operating status of the expander. Rules are provided and interpreted within the expander to define changes to be made to the present SAS Zoning Features based on changes to the present operating status of the expander. The present operating status may be, for example, the present day, date, time of day, etc. Exemplary rules may define a modification to the zone group identifier to be associated with a PHY of the expander based on the present operating status of the expander. Exemplary rules may also define a modification to the zone permission defined for a pair of zone group identifiers. Further features and aspects hereof provide for a read-only zone permission value in addition to those defined by the SAS specification. | 2013-11-14 |
20130304952 | METHODS AND STRUCTURE FOR CONFIGURING A SERIAL ATTACHED SCSI DOMAIN VIA A UNIVERSAL SERIAL BUS INTERFACE OF A SERIAL ATTACHED SCSI EXPANDER - Methods and structure are provided for managing a Serial Attached SCSI (SAS) domain via Universal Serial Bus (USB) communications. The system comprises a SAS expander. The SAS expander comprises a plurality of physical links, a USB interface, and a control unit. The control unit is operable to receive USB packets via the USB interface, to determine SAS management information based upon the received USB packets, and to alter a configuration of the SAS domain based upon the SAS management information determined from the USB packets. | 2013-11-14 |
20130304953 | CIRCUITRY TO GENERATE AND/OR USE AT LEAST ONE TRANSMISSION TIME IN AT LEAST ONE DESCRIPTOR - An embodiment may include circuitry that may generate and/or use, at least in part, at least one descriptor to be associated with at least one packet. The at least one descriptor may specify at least one transmission time at which the at least one packet is to be transmitted. The at least one transmission time may be specified in the at least one descriptor in such a manner as to permit the at least one transmission time to be explicitly identified based at least in part upon the at least one descriptor. Many alternatives, modifications, and variations are possible without departing from this embodiment. | 2013-11-14 |
20130304954 | Dynamically Optimizing Bus Frequency Of An Inter-Integrated Circuit ('I2C') Bus - Optimizing an I | 2013-11-14 |
20130304955 | Methods and Apparatuses for Trace Multicast Across a Bus Structure, and Related Systems - Systems and methods for trace multicast across a bus structure are provided. Preferably, the bus structure is that of a System-on-a-Chip (SoC), where the SoC includes a number of master components and a number of slave components connected via the bus structure. The bus structure supports a trace multicast feature. In one embodiment, the bus structure receives a bus transaction from a master component and, in response, outputs the bus transaction to a corresponding slave port. In addition, the bus structure determines whether a trace multicast is desired for the bus transaction. If a trace multicast is desired, the bus structure generates an additional bus transaction having one or more transaction attributes that include a translated version of the bus transaction and outputs the additional bus transaction to a trace slave port of the bus structure. The trace multicast feature provides a non-invasive mechanism for driver-level trace. | 2013-11-14 |
20130304956 | METHODS AND APPARATUSES FOR MULTIPLE PRIORITY ACCESS IN A WIRELESS NETWORK SYSTEM - In one embodiment, the method for registering to a wireless network includes transmitting a registration request from a device designated as having a low access priority. The registration request includes a value indicating that the device supports multiple access priorities. The multiple access priorities include the low access priority and at least one higher access priority. The method further includes requesting access when connecting to the wireless network based on a response to the registration request. | 2013-11-14 |
20130304957 | Method, System, and Apparatus for Dynamic Reconfiguration of Resources - Dynamic reconfiguration that includes on-line addition, deletion, and replacement of individual modules to support dynamic partitioning of a system, interconnect (link) reconfiguration, memory RAS to allow migration and mirroring without OS intervention, dynamic memory reinterleaving, CPU and socket migration, and support for global shared memory across partitions is described. To facilitate the on-line addition or deletion, the firmware is able to quiesce and de-quiesce the domain of interest so that many system resources, such as routing tables and address decoders, can be updated in what essentially appears to be an atomic operation to the software layer above the firmware. | 2013-11-14 |
20130304958 | System and Method for Processing Device with Differentiated Execution Mode - In accordance with an embodiment of the present invention, a method of operating a system includes operating in a first operating mode to not permit access to an address range, receiving a priority interrupt (PI) signal. The method further includes operating in a second operating mode to permit access to the address range in response to receiving the PI signal. | 2013-11-14 |
20130304959 | Handheld Device Ecosystem with Docking Devices - A handheld device that can be reconfigured according to environment data. The handheld device can connect to different docking stations and the handheld device may be reconfigured differently for each docking station. When the handheld device connects to a docking station, the handheld device receives a set of parameters (environment data) and the handheld device uses this set of parameters to reconfigure itself. The reconfiguration includes selecting an application based on the set of parameters and launching the selected application. | 2013-11-14 |
20130304960 | Apparatus, System and Method For Configuration of Adaptive Integrated Circuitry Having Fixed, Application Specific Computational Elements - The present invention concerns configuration of a new category of integrated circuitry for adaptive or reconfigurable computing. The preferred adaptive computing engine (ACE) IC includes a plurality of heterogeneous computational elements coupled to an interconnection network. The plurality of heterogeneous computational elements include corresponding computational elements having fixed and differing architectures, such as fixed architectures for different functions such as memory, addition, multiplication, complex multiplication, subtraction, configuration, reconfiguration, control, input, output, and field programmability. In response to configuration information, the interconnection network is operative to configure and reconfigure the plurality of heterogeneous computational elements for a plurality of different functional modes, including linear algorithmic operations, non-linear algorithmic operations, finite state machine operations, controller operations, memory operations, and bit-level manipulations. The preferred system embodiment includes an ACE integrated circuit coupled with the configuration information needed to provide an operating mode. Preferred methodologies include various means to generate and provide configuration information for various operating modes. | 2013-11-14 |
20130304961 | HUB CONTROL CHIP - A HUB control chip implemented in a specific package is provided. The HUB control chip includes a plurality of transmission modules and a plurality of pins. The plurality of the pins include: a plurality of data pin groups coupled to one of the plurality of transmission modules respectively. Each of the plurality of data pin groups includes: a first sub-group, receiving and transmitting a first pair of differential signals conforming to the USB 2.0 standard; a second sub-group, receiving a second pair of differential signals conforming to the USB 3.0 standard; and a third sub-group, transmitting a third pair of differential signals conforming to the USB 3.0 standard. The number of the plurality of the pins is less than or equal to 52. | 2013-11-14 |
20130304962 | FIRMWARE CLEANUP DEVICE - A firmware cleanup device includes a solid state disk (SSD) and an operation member. The SSD includes two pads and a connection portion; the connection portion defines two contacting pins that are respectively and electrically connected to the two pads. The operation member is detachably connected to the connection portion; the operation member includes two interconnected connection lines, and the two connection lines are respectively and electrically connected to the two contacting pins. | 2013-11-14 |
20130304963 | MEMORY MANAGING DEVICE AND METHOD AND ELECTRONIC APPARATUS - A memory managing device and method and an electronic apparatus are provided. The memory managing device is applied to a memory having a plurality of storage regions that can be physically separated, and comprises: a storage detecting unit for detecting the current storage status of the memory; a block computing unit for computing the current active blocks in the memory; a discreteness deciding unit for deciding whether the discreteness of a segment in the memory is larger than a predetermined threshold; a segment arranging unit for arranging the segment, when the discreteness is larger than the predetermined threshold, to move the active blocks to a set of storage regions containing fewer storage regions than before the movement; and a power consumption setting unit for setting the storage regions outside that set to a low power consumption status. With the memory managing device, method, and electronic apparatus according to the embodiments of this application, all of the active blocks in the memory can be concentrated into fewer physical storage regions, so that the power consumption of the memory is reduced while the efficiency of memory usage is increased. | 2013-11-14 |
20130304964 | DATA PROCESSING METHOD, AND MEMORY CONTROLLER AND MEMORY STORAGE DEVICE USING THE SAME - A data processing method for a re-writable non-volatile memory module is provided. The method includes receiving a write data stream associating to a logical access address of a logical programming unit; selecting a physical programming unit; and determining whether the write data stream associates with a kind of pattern. The method includes, if the write data stream associates with the kind of pattern, setting identification information corresponding to the logical access address as an identification value corresponding to the pattern, and storing the identification information corresponding to the logical access address into a predetermined area, wherein the write data stream is not programmed into the selected physical programming unit. The method further includes mapping the logical programming unit to the physical programming unit. Accordingly, the method can effectively shorten the time for writing data into the re-writable non-volatile memory module. | 2013-11-14 |
20130304965 | STORAGE UNIT MANAGEMENT METHOD, MEMORY CONTROLLER AND MEMORY STORAGE DEVICE USING THE SAME - A storage unit management method for managing a plurality of physical units in a rewritable non-volatile memory module is provided, wherein the physical units are at least grouped into a data area and a spare area. The method includes configuring a plurality of logical units for mapping to the physical units belonging to the data area, and determining whether the rewritable non-volatile memory module contains cold data. The method further includes performing a first wear-leveling procedure on the physical units if it is determined that the rewritable non-volatile memory module does not contain any cold data, and performing a second wear-leveling procedure on the physical units if it is determined that the rewritable non-volatile memory module contains the cold data. | 2013-11-14 |
20130304966 | NON-VOLATILE MEMORY DEVICE AND METHOD FOR PROGRAMMING THE SAME - A non-volatile memory device and a method for programming the same are disclosed. The non-volatile memory device includes first and second memory blocks, each of which includes a plurality of memory cells including a plurality of pages in which data is written; a data write unit, upon receiving a write signal and an address allocation signal, configured to write first data in a first page of the first memory block, and write second data in a first page of the second memory block; and a copy-back unit, upon receiving a chip idle signal and a copy-back control signal, configured to write the first data written in the first memory block into a second page of the second memory block. | 2013-11-14 |
20130304967 | INFORMATION MEMORY SYSTEM IN WHICH DATA RECEIVED SERIALLY IS DIVIDED INTO PIECES OF DATA AND MEMORY ABNORMALITY PROCESSING METHOD FOR AN INFORMATION MEMORY SYSTEM - An information memory system, in which received data is divided into pieces of data that are stored in memories in parallel, includes a controller configured to store the number of divided pieces of data and to monitor read requests and buffer-full notices. In a case where the number of read requests does not reach the number of valid memory units and the buffer-full notice persists in all buffers except for one buffer that does not output a read request, the controller performs a read control corresponding to the buffers that output the buffer-full notice, and controls the integration of the reconstructed piece of data, after it is read from the memory unit corresponding to the buffer that does not output the read request, with the pieces of data read from the memory units corresponding to the buffers that output the buffer-full notice. | 2013-11-14 |
20130304968 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are demoted to a second stride in the second cache having an occupancy count indicating that the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. | 2013-11-14 |
20130304969 | PERFORMANCE IMPROVEMENT OF A CAPACITY OPTIMIZED STORAGE SYSTEM INCLUDING A DETERMINER - A system for storing data comprises a performance storage unit and a performance segment storage unit. The system further comprises a determiner. The determiner determines whether a requested data is stored in the performance storage unit. The determiner determines whether the requested data is stored in the performance segment storage unit in the event that the requested data is not stored in the performance storage unit. | 2013-11-14 |
20130304970 | SYSTEMS AND METHODS FOR PROVIDING HIGH PERFORMANCE REDUNDANT ARRAY OF INDEPENDENT DISKS IN A SOLID-STATE DEVICE - The present disclosure relates to systems and methods for providing high performance Redundant Array of Independent Disks (RAID) in a solid-state device. The present disclosure includes a solid state device. The solid state device can include a buffer having a plurality of bit cells, configured to maintain a plurality of bits of information. The solid state device can also include a memory controller configured to logically partition the plurality of bit cells into a plurality of logical blocks, each configured to maintain a data block. The solid state device can additionally include a RAID engine coupled to the buffer, where the buffer is configured to provide data blocks to the RAID engine, and in response, the RAID engine is configured to compute first parity bits from the data blocks and directly provide the first parity bits to one of a plurality of flash memory devices. | 2013-11-14 |
20130304971 | CONTROL DEVICE, STORAGE DEVICE, AND DATA WRITING METHOD - A control device includes a control unit that performs a writing control of supplied host data, according to a data writing request from a host apparatus, with respect to a non-volatile memory where multi-value storage with 2 bits or more is performed in one memory cell, having a lower level page and an upper level page for at least the multi-value storage as a physical page in which a physical address is set, and where data writing is performed using each physical page in an order of physical addresses, and that causes the data writing to be performed until the physical page immediately before the lower level page, such that the data writing according to a next data writing request is started from the lower level page. | 2013-11-14 |
20130304972 | CONTROL DEVICE, STORAGE DEVICE, AND STORAGE CONTROL METHOD - A control device includes a control unit that performs control of writing of data with respect to a first non-volatile memory, in which a size of a physical block that is a deletion unit is larger than a size of a physical page that is a minimum writing unit, and generates logical and physical address management information that indicates a correspondence relation between a physical page address and a logical address in a writing target physical block, in which data is written through the control of writing, so as to perform control so that the logical and physical address management information is stored in a second non-volatile memory every time data is written in the first non-volatile memory. | 2013-11-14 |
20130304973 | CONTROL DEVICE, STORAGE DEVICE, AND STORAGE CONTROL METHOD - A control device includes a control unit that performs control of writing of data with respect to a memory unit, in which a size of a physical block that is a deletion unit is larger than a size of a physical page that is a minimum writing unit, and generates logical and physical address management information that indicates a correspondence relation between a physical page address and a logical address in a writing target physical block, in which data is written through the control of writing, so as to perform control so that the logical and physical address management information is written in the writing target physical block. | 2013-11-14 |
20130304974 | SYSTEM AND METHOD FOR STORING DATA USING A FLEXIBLE DATA FORMAT - A flash storage device includes a flash storage for storing data and a controller for receiving a command in connection with user data and selecting a sector size associated with storing the user data. The controller allocates the user data among data sectors having the sector size and writes the data sectors to the flash storage. In some embodiments, the controller generates system data and stores the system data in the data sectors or a system sector, or both. | 2013-11-14 |
20130304975 | APPARATUSES FOR MANAGING AND ACCESSING FLASH MEMORY MODULE - A method for maintaining address mapping for a flash memory module is disclosed including: recording a first set of addresses corresponding to a first set of sequential logical addresses in a first section of a first addressing block; recording a second set of addresses corresponding to a second set of sequential logical addresses in a second section of the first addressing block; recording a third set of addresses corresponding to a third set of sequential logical addresses in a first section of a second addressing block; and recording a fourth set of addresses corresponding to a fourth set of sequential logical addresses in a second section of the second addressing block; wherein the second set of logical addresses is successive to the first set of logical addresses, and the third set of logical addresses is successive to the second set of logical addresses. | 2013-11-14 |
20130304976 | TECHNIQUES FOR PROVIDING DATA REDUNDANCY AFTER REDUCING MEMORY WRITES - A storage subsystem receives writes from a computer via a storage subsystem interface. The storage subsystem reduces a number of the writes. A single drive of the storage subsystem has primary and redundant storage devices with storage device interfaces. A disk controller of the single drive implements a data redundancy scheme by storing data associated with the reduced number of writes in the primary storage devices and by storing computed redundancy information in the redundant storage devices. The disk controller is operable without a loss of data in the presence of at least a single failure of any of the storage devices. Optionally the storage devices are flash memory devices. Optionally the disk controller is operable without a loss of data in the presence of at least two failures of any of the storage devices when a number of the redundant storage devices is at least two. | 2013-11-14 |
20130304977 | METHOD FOR PERFORMING MEMORY ACCESS MANAGEMENT, AND ASSOCIATED MEMORY DEVICE AND CONTROLLER THEREOF - A method for performing memory access management includes: with regard to a same Flash cell of a Flash memory, receiving a first digital value outputted by the Flash memory, requesting the Flash memory to output at least one second digital value, wherein the first digital value and the at least one second digital value are utilized for determining information of a same bit stored in the Flash cell, and a number of various possible states of the Flash cell correspond to a possible number of bit(s) stored in the Flash cell; based upon the second digital value, generating/obtaining soft information of the Flash cell, for use of performing soft decoding; and controlling the Flash memory to perform sensing operations by respectively utilizing a plurality of sensing voltages that are not all the same, in order to generate the first digital value and the second digital value. | 2013-11-14 |
20130304978 | HIGH-PERFORMANCE STORAGE STRUCTURES AND SYSTEMS FEATURING MULTIPLE NON-VOLATILE MEMORIES - A memory storage system that includes at least a storage controller, a first non-volatile, solid-state memory and a second non-volatile, solid-state memory. The storage controller has an interface to receive commands from a host system. The first non-volatile, solid-state memory device is coupled with the storage controller to at least store data received from the host system. The second non-volatile, solid-state memory is coupled with the storage controller to store context information corresponding to the data stored in the first non-volatile, solid-state memory device. | 2013-11-14 |
20130304979 | ACCESS CONTROL FOR NON-VOLATILE RANDOM ACCESS MEMORY ACROSS PLATFORM AGENTS - A controller is used in a computer system to control access to an NVRAM. The computer system includes a processor coupled to a non-volatile random access memory (NVRAM). The NVRAM is byte-rewritable and byte-erasable. The NVRAM stores data to be used by a set of agents including in-band agents and an out-of-band agent. The in-band agents run on a processor having one or more cores, and the out-of-band agent that runs on a non-host processing element. When the controller receives an access request from the out-of-band agent, the controller determines, based on attributes associated with the out-of-band agent, whether a region in the NVRAM is shareable by the out-of-band agent and at least one of the in-band agents. | 2013-11-14 |
20130304980 | AUTONOMOUS INITIALIZATION OF NON-VOLATILE RANDOM ACCESS MEMORY IN A COMPUTER SYSTEM - A non-volatile random access memory (NVRAM) is used in a computer system to store information that allows the NVRAM to autonomously initialize itself at power-on. The computer system includes a processor, an NVRAM controller coupled to the processor, and an NVRAM that comprises the NVRAM controller. The NVRAM is byte-rewritable and byte-erasable by the processor. The NVRAM stores a memory interface table containing information for the NVRAM controller to autonomously initialize the NVRAM upon power-on of the computer system without interacting with the processor and firmware outside of the NVRAM. The information is provided by the NVRAM controller to the processor to allow the processor to access the NVRAM. | 2013-11-14 |
20130304981 | Computer System and Method of Memory Management - Computer systems and methods for memory management in a computer system are provided. A computer system includes an integrated circuit, where the integrated circuit includes a processing unit and a memory controller coupled to the processing unit. The memory controller includes a first interface and a second interface configured to couple the memory controller with a first memory and a second memory, respectively. The second interface is separate from the first interface. The computer system includes the first memory of a first memory type coupled to the memory controller through the first interface. The computer system further includes the second memory coupled to the memory controller through the second interface, where the second memory is of a second memory type that has a different power consumption characteristic than that of the first memory type. | 2013-11-14 |
20130304982 | MEMORY DEVICE, MEMORY SYSTEM, AND OPERATING METHODS THEREOF - A memory device, a memory system, and operating methods thereof are provided. The method of operating the memory device, which includes a first memory cell and a second memory cell neighboring the first memory cell, includes counting a disturbance value of the second memory cell each time the first memory cell is accessed, updating a disturbance count value of the second memory cell based on the counting, adjusting a refresh schedule based on the disturbance count value of the second memory cell, a desired threshold and a maximum disturbance count value, and resetting the disturbance count value of the second memory cell and the maximum disturbance count value when the second memory cell is refreshed according to the adjusted refresh schedule. | 2013-11-14 |
20130304983 | DYNAMIC ALLOCATION OF RECORDS TO CLUSTERS IN A TERNARY CONTENT ADDRESSABLE MEMORY - Embodiments of the invention are directed to a TCAM for longest prefix matching in a routing system. The TCAM comprises a plurality of records of which a portion are configured into one or more address clusters each such cluster corresponding to a respective IP address prefix length and another portion of which are configured into a free cluster not corresponding to any IP address prefix length. | 2013-11-14 |
20130304984 | ENHANCED BLOCK COPY - The present disclosure includes methods and apparatus for an enhanced block copy. One embodiment includes reading data from a source block located in a first portion of the memory device, and programming the data to a target block located in a second portion of the memory device. The first and second portions are communicatively coupled by data lines extending across the portions. The data lines are communicatively uncoupled between the first and second portions for at least one of the reading and programming acts. | 2013-11-14 |
20130304985 | SYSTEMS AND METHODS OF MEDIA MANAGEMENT, SUCH AS MANAGEMENT OF MEDIA TO AND FROM A MEDIA STORAGE LIBRARY, INCLUDING REMOVABLE MEDIA - A system and method for determining media to be exported out of a media library is described. In some examples, the system determines a media component to be exported, determines the media component is in the media library for a specific process, and exports the media component after the process is completed. | 2013-11-14 |
20130304986 | SYSTEMS AND METHODS FOR SECURE HOST RESOURCE MANAGEMENT - Systems and methods are described herein to provide for secure host resource management on a computing device. Other embodiments include apparatus and system for management of one or more host device drivers from an isolated execution environment. Further embodiments include methods for querying and receiving event data from manageable resources on a host device. Further embodiments include data structures for the reporting of event data from one or more host device drivers to one or more capability modules. | 2013-11-14 |
20130304987 | DYNAMIC LOAD BALANCING OF DISTRIBUTED PARITY IN A RAID ARRAY - A parity pattern defines a repeated distribution of parity blocks within a distributed parity disk array (“DPDA”). The parity pattern identifies on which disks the parity block or blocks for a stripe are located. When a new disk is added to the DPDA, the parity pattern is modified so that the distribution of parity blocks within the parity pattern is even. Parity blocks within the DPDA are then redistributed to conform with the modified parity pattern. | 2013-11-14 |
20130304988 | SCHEDULING ACCESS REQUESTS FOR A MULTI-BANK LOW-LATENCY RANDOM READ MEMORY DEVICE - Described herein are a method and apparatus for scheduling access requests for a multi-bank low-latency random read memory (LLRRM) device within a storage system, the LLRRM device comprising a plurality of memory banks, each bank being simultaneously and independently accessible. A queuing layer residing in the storage system may allocate a plurality of request-queuing data structures ("queues"), each queue being assigned to a memory bank. The queuing layer may receive access requests for memory banks in the LLRRM device and store each received access request in the queue assigned to the requested memory bank. The queuing layer may then send, to the LLRRM device for processing, an access request from each request-queuing data structure in successive order. As such, requests sent to the LLRRM device will be applied to each memory bank in successive order as well, thereby reducing access latencies of the LLRRM device. | 2013-11-14 |
20130304989 | INFORMATION PROCESSING APPARATUS - When a link unit detects a built-in WLAN memory card by using wireless communication, the link unit determines whether there is a match between the SSID of the detected built-in WLAN memory card and the SSID of the built-in WLAN memory card that is inserted into the PC. If the SSIDs match, it means the built-in WLAN memory card that has been detected using wireless communication is the built-in WLAN memory card that is inserted into the PC and the link unit consequently performs a control such that the WLAN unit does not acquire still images nor moving images. | 2013-11-14 |
20130304990 | Dynamic Control of Cache Injection Based on Write Data Type - Selective cache injection of write data generated or used by a coprocessor hardware accelerator in a multi-core processor system having a hierarchical bus architecture to facilitate transfer of address and data between multiple agents coupled to the bus. A bridge device maintains configuration settings for cache injection of write data and includes a set of n shared write data buffers used for write requests to memory. Each coprocessor hardware accelerator has m local write data cacheline buffers holding different types of write data. For write data produced by a coprocessor hardware accelerator, cache injection is accomplished based on configuration settings in a DMA channel dedicated to the coprocessor and a bridge controller. The access history of cache injected data for a particular processing thread or data flow is also tracked to determine whether to down grade or maintain a request for cache injection. | 2013-11-14 |
20130304991 | DATA PROCESSING APPARATUS HAVING CACHE AND TRANSLATION LOOKASIDE BUFFER - A data processing apparatus has a cache and a translation lookaside buffer (TLB). A way table is provided for identifying which of a plurality of cache ways stores required data. Each way table entry corresponds to one of the TLB entries of the TLB and identifies, for each memory location of the page associated with the corresponding TLB entry, which cache way stores the data associated with that memory location. Also, the cache may be capable of servicing M access requests in the same processing cycle. An arbiter may select pending access requests for servicing by the cache in a way that ensures that the selected pending access requests specify a maximum of N different virtual page addresses, where N<M. | 2013-11-14 |
20130304992 | MULTI-CPU SYSTEM AND COMPUTING SYSTEM HAVING THE SAME - A multi-CPU data processing system, comprising: a multi-CPU processor, comprising: a first CPU configured with at least a first core, a first cache, and a first cache controller configured to access the first cache; and a second CPU configured with at least a second core, and a second cache controller configured to access a second cache, wherein the first cache is configured from a shared portion of the second cache. | 2013-11-14 |
20130304993 | Method and Apparatus for Tracking Extra Data Permissions in an Instruction Cache - Systems and methods are disclosed for maintaining an instruction cache including extended cache lines and page attributes for main cache line portions of the extended cache lines and, at least for one or more predefined potential page-crossing instruction locations, additional page attributes for extra data portions of the corresponding extended cache lines. In addition, systems and methods are disclosed for processing page-crossing instructions fetched from an instruction cache having extended cache lines. | 2013-11-14 |
20130304994 | Per Thread Cacheline Allocation Mechanism in Shared Partitioned Caches in Multi-Threaded Processors - Systems and methods for allocation of cache lines in a shared partitioned cache of a multi-threaded processor. A memory management unit is configured to determine attributes associated with an address for a cache entry associated with a processing thread to be allocated in the cache. A configuration register is configured to store cache allocation information based on the determined attributes. A partitioning register is configured to store partitioning information for partitioning the cache into two or more portions. The cache entry is allocated into one of the portions of the cache based on the configuration register and the partitioning register. | 2013-11-14 |
20130304995 | Scheduling Synchronization In Association With Collective Operations In A Parallel Computer - Methods, apparatuses, and computer program products for scheduling synchronization in association with collective operations in a parallel computer that includes a shared memory and a plurality of compute nodes that execute a parallel application utilizing the shared memory are provided. Embodiments include acquiring an available channel of the shared memory; posting to the acquired channel of the shared memory one or more collective operations and a synchronization point; determining that processing within the acquired channel has reached the synchronization point; and posting to the acquired channel, in response to determining that processing within the acquired channel has reached the synchronization point, a background synchronization operation corresponding to the one or more collective operations. | 2013-11-14 |
20130304996 | METHOD AND SYSTEM FOR RUN TIME DETECTION OF SHARED MEMORY DATA ACCESS HAZARDS - A system and method for detecting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises an initialization bit as well as access type information, collectively called the state tracking bits for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. The second access is identified based on a status of the state tracking bits. The method also includes determining a hazard based on a first type of access and a second type of access to the shared memory location. Information related to the first access is provided in the table. | 2013-11-14 |
20130304997 | Command Throttling for Multi-Channel Duty-Cycle Based Memory Power Management - A technique for memory command throttling in a partitioned memory subsystem includes accepting, by a master memory controller included in multiple memory controllers, a synchronization command. The synchronization command includes command data that includes an associated synchronization indication (e.g., synchronization bit(s)) for each of the multiple memory controllers, and each of the multiple memory controllers controls a respective partition of the partitioned memory subsystem. In response to receiving the synchronization command, the master memory controller forwards the synchronization command to the multiple memory controllers. In response to receiving the forwarded synchronization command, each of the multiple memory controllers de-asserts an associated status bit. In response to receiving the forwarded synchronization command, each of the multiple memory controllers determines whether the associated synchronization indication is asserted. Each of the multiple memory controllers with the asserted associated synchronization indication then transmits the forwarded synchronization command to associated power control logic. | 2013-11-14 |
20130304998 | WRITE COMMAND OVERLAP DETECTION - The present disclosure includes methods and apparatuses that include write command overlap detection. A number of embodiments include receiving an incoming write command and comparing a logical address of the incoming write command to logical addresses of a number of write commands in a queue using a tree data structure, wherein a starting logical address and/or an ending logical address of the incoming write command and a starting logical address and/or an ending logical address of each of the number of write commands are associated with nodes in the tree data structure. | 2013-11-14 |
20130304999 | ELECTRONIC DEVICE AND SERIAL DATA COMMUNICATION METHOD - In a case where specific data (enable write data) is written in an enable/disenable register ( | 2013-11-14 |
20130305000 | SIGNAL PROCESSING CIRCUIT - A memory controller is connected to memory, and has no ECC (Error Check and Correct) function. An embedded CPU is connected to the memory via the memory controller such that it can access the memory. A memory check circuit is connected to the memory via the memory controller such that it can access the memory, and configured to access the memory in the non-operating period of the embedded CPU, so as to check the data stored in the memory. | 2013-11-14 |
20130305001 | Vector-Based Matching Circuit for Data Streams - Systems and methods are described relating to a matcher that inputs partial vectors at a rate of 1 per clock cycle and delivers complete vectors at the output with an indication per vector of its validity. The matcher can copy a maximum number of valid elements from an input queue to a target vector in order each clock cycle and eliminate copied elements from the input queue. The completely filled target vectors are paired with the complete data vectors and outputted as composite vectors. | 2013-11-14 |
20130305002 | SNAPSHOT MECHANISM - A memory management system for a thinly provisioned memory volume in which a relatively larger virtual address range of virtual address blocks is mapped to a relatively smaller physical memory comprising physical memory blocks via a mapping table containing entries only for addresses of the physical memory blocks containing data. The memory management system comprises a snapshot provision unit to take a given snapshot of the memory volume at a given time, the snapshot comprising a mapping table and memory values of the volume, the mapping table and memory values comprising entries only for addresses of the physical memory containing data. The snapshot is managed on the same thin provisioning basis as the volume itself, and the system is particularly suitable for RAM type memory disks. | 2013-11-14 |
20130305003 | STORAGE APPARATUS AND DATA MANAGEMENT METHOD - The present invention provides high-speed copying of a compressed data volume. | 2013-11-14 |
20130305004 | MIGRATION OF DATA IN A DISTRIBUTED ENVIRONMENT - Aspects migrate dynamically changing data sets in a distributed application environment. Writes to a source device are intercepted and it is determined whether data in the source device is being migrated to a target device. If data is being migrated, then the intercepted write is mirror-written synchronously to both the source and the target. Data being migrated is read from a region of the source and written to a region of the target and also to a mirror writing memory location. The source region data is re-read and compared to the originally read data that is written to the mirror writing memory location. If the compared data does not match, the data migration from the source region to the target region (and to the mirror writing memory location) is repeated until the originally read data and the re-read data match. | 2013-11-14 |
20130305005 | MANAGING VIRTUAL HARD DRIVES AS BLOBS - Cloud computing platforms having computer-readable media that perform methods for facilitating communications with storage. A request having a first-interface format to access storage is intercepted. The first interface format of the request supports access to a virtual hard drive (VHD). The request is translated to a blob request having a blob interface format. The blob interface format of the blob request supports access to a plurality of blobs of data in a blob store. The blob request is communicated to a blob interface such that the blob request is executed in managing the plurality of blobs. | 2013-11-14 |
20130305006 | METHOD, SYSTEM AND APPARATUS FOR REGION ACCESS CONTROL - Techniques and mechanisms for providing access to a storage device of a computer platform. In an embodiment, an agent executing on the platform may be registered for access to the storage device, the agent being allocated a memory space by a host operating system of the platform. Registration of the agent may result in a location in the allocated memory space being mapped to a location in the storage device. In another embodiment, the agent may write to the location in the allocated memory space to request access to the storage device, wherein the request is independent of any system call to the host OS which describes the requested access. | 2013-11-14 |
20130305007 | MEMORY MANAGEMENT METHOD, MEMORY MANAGEMENT DEVICE, MEMORY MANAGEMENT CIRCUIT - A memory management method includes extracting, from a conversion table, a physical address at which an error has been detected. The memory management method includes extracting, when a physical address that indicates a storage area that stores therein information that is to be deleted due to the occurrence of the detected error is acquired from the information processing apparatus, the memory address associated with the acquired physical address from the conversion table, performed by the memory management device. The memory management method includes updating the conversion table such that the extracted memory address is associated in the conversion table with the extracted physical address, performed by the memory management device. The memory management method includes moving the information stored in the storage area indicated by the extracted physical address to the storage area indicated by the extracted memory address. | 2013-11-14 |
20130305008 | MEMORY OPERATION TIMING CONTROL METHOD AND MEMORY SYSTEM USING THE SAME - A method of controlling operation timing of memory devices included in a storage apparatus, and a memory system using the method. The method includes adjusting operation timing such that a number of memory devices that simultaneously perform operations is below a reference value according to a host request, and issuing operations according to the adjusted operation timing and transferring the issued operations to the memory devices. | 2013-11-14 |
20130305009 | VIRTUAL MEMORY STRUCTURE FOR COPROCESSORS HAVING MEMORY ALLOCATION LIMITATIONS - One embodiment sets forth a technique for dynamically allocating memory during multi-threaded program execution for a coprocessor that does not support dynamic memory allocation, memory paging, or memory swapping. The coprocessor allocates an amount of memory to a program as a put buffer before execution of the program begins. If, during execution of the program by the coprocessor, a request presented by a thread to store data in the put buffer cannot be satisfied because the put buffer is full, the thread notifies a worker thread. The worker thread processes a notification generated by the thread by dynamically allocating a swap buffer within a memory that cannot be accessed by the coprocessor. The worker thread then pages the put buffer into the swap buffer during execution of the program to empty the put buffer, thereby enabling threads executing on the coprocessor to dynamically receive memory allocations during execution of the program. | 2013-11-14 |
20130305010 | DIFFERENTIAL DELAY COMPENSATION - In one embodiment, a method comprises receiving a plurality of data frames representing at least one virtually concatenated data stream; storing the plurality of data frames in a memory; and recording, for each of a plurality of data frames, a physical write address that indicates a position in the memory and a virtual write address that includes a multiframe indicator and a byte number indicator. | 2013-11-14 |
20130305011 | PERFORMING A CYCLIC REDUNDANCY CHECKSUM OPERATION RESPONSIVE TO A USER-LEVEL INSTRUCTION - In one embodiment, the present invention includes a method for receiving incoming data in a processor and performing a checksum operation on the incoming data in the processor pursuant to a user-level instruction for the checksum operation. For example, a cyclic redundancy checksum may be computed in the processor itself responsive to the user-level instruction. Other embodiments are described and claimed. | 2013-11-14 |
20130305012 | IMPLEMENTATION OF COUNTERS USING TRACE HARDWARE - A multi-core computing system includes a plurality of processor cores, a counter, and a register block including a plurality of event registers coupled to the plurality of processor cores. Each of the plurality of processor cores is configured to write event records to the event registers, and the register block is configured to generate a serialized event stream including event records written to the event registers. The system further includes an event stream processor configured to receive the serialized event stream, to analyze the serialized event stream to identify a counter update event record in the serialized event stream, and to update the counter in response to the counter update event record. | 2013-11-14 |
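The per-bank queuing scheme abstracted in 20130304988 above can be illustrated with a minimal sketch: one queue per memory bank, serviced round-robin so that consecutive requests dispatched to the device target the banks in successive order. The function name `schedule` and the `bank_of` mapping are illustrative assumptions, not taken from the application.

```python
# Sketch (not the patented implementation): per-bank request queues
# drained round-robin, so successive dispatches hit different banks
# and per-bank access latency can be overlapped.
from collections import deque


def schedule(requests, num_banks, bank_of):
    """Order `requests` so consecutive dispatches visit the memory
    banks in successive order. `bank_of` maps a request to its bank
    index (an assumed helper for this sketch)."""
    queues = [deque() for _ in range(num_banks)]
    for r in requests:
        queues[bank_of(r)].append(r)  # enqueue on the requested bank's queue
    ordered = []
    while any(queues):
        for q in queues:              # visit each bank queue in turn
            if q:
                ordered.append(q.popleft())
    return ordered
```

With six requests where the first three target bank 0 and the rest bank 1, the dispatch order alternates between the two banks.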
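The overlap test abstracted in 20130304998 above reduces to an interval-intersection check: an incoming write overlaps a queued write iff their logical address ranges intersect. The sketch below keeps queued commands sorted by starting logical address with the standard-library `bisect` module; the class and method names are illustrative assumptions, and a linear scan stands in for the tree traversal described in the application.

```python
# Sketch (not the patented implementation): detect whether an incoming
# write command's logical address range overlaps any queued write.
import bisect


class WriteQueue:
    def __init__(self):
        self._starts = []  # sorted starting logical addresses
        self._cmds = []    # (start, end) pairs, parallel to _starts

    def add(self, start, length):
        """Queue a write covering logical addresses [start, start+length-1]."""
        end = start + length - 1
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._cmds.insert(i, (start, end))

    def overlaps(self, start, length):
        """True if [start, start+length-1] intersects any queued write."""
        end = start + length - 1
        # Only commands starting at or before `end` can possibly overlap.
        i = bisect.bisect_right(self._starts, end)
        return any(e >= start for _, e in self._cmds[:i])
```

A queued write at addresses 100-115 overlaps incoming writes touching any part of that range and nothing outside it.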