17th week of 2016 patent application highlights part 41
Patent application number | Title | Published |
20160117193 | RESOURCE MAPPING IN MULTI-THREADED CENTRAL PROCESSOR UNITS - A processor determines that processing of a first thread is suspended due to limited availability of a processing resource. The processor supports execution of a plurality of threads in parallel. The processor obtains a lock on a second processing resource that is substitutable as a resource during processing of the first thread. The second processing resource is included as part of a component that is external to the processor. The component supports a number of threads that is less than the plurality of threads. The processing of the first thread is suspended until the lock is available. The processor processes the first thread using the second processing resource. The processor includes a shared register to support mapping a portion of the plurality of threads to the component. The portion of the plurality of threads is equal to, at most, the number of threads supported by the component. | 2016-04-28 |
20160117194 | METHODS AND APPARATUS FOR RESOURCE MANAGEMENT IN CLUSTER COMPUTING - Embodiments of an event-driven resource management technique may enable the management of cluster resources at a sub-computer level (e.g., at the thread level) and the decomposition of jobs at an atomic (task) level. A job queue may request a resource for a job from a resource manager, which may locate a resource in a resource list and grant the resource to the job queue. After the resource is granted, the job queue sends the job to the resource, on which the job may be partitioned into tasks and from which additional resources may be requested from the resource manager. The resource manager may locate additional resources in the list and grant the resources to the resource. The resource sends the tasks to the granted resources for execution. As resources complete their tasks, the resource manager is informed so that the status of the resources in the list can be updated. | 2016-04-28 |
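The grant/track cycle in the abstract above can be sketched as follows. All names (`ResourceManager`, `grant`, `release`) are illustrative assumptions, not terminology from the patent.

```python
# Minimal sketch of an event-driven resource manager: resources are located
# in a list, granted to requesters, and their status is updated on completion.
# All names are illustrative, not from the patent.

class ResourceManager:
    def __init__(self, resources):
        # Track each resource's status: True means free, False means granted.
        self.free = {r: True for r in resources}

    def grant(self):
        # Locate a free resource in the list and grant it, or return None.
        for r, is_free in self.free.items():
            if is_free:
                self.free[r] = False
                return r
        return None

    def release(self, r):
        # A resource reports task completion; its status in the list is updated.
        self.free[r] = True

manager = ResourceManager(["thread-0", "thread-1"])
job_resource = manager.grant()    # the job queue requests a resource for a job
task_resource = manager.grant()   # the job requests an additional resource for a task
manager.release(task_resource)    # the task completes; status is updated
```

The hedged point of the sketch is only the lifecycle: request, grant from a tracked list, and status update on completion.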
20160117195 | FACILITATING ELASTIC ALLOCATION OF ORGANIZATION-SPECIFIC QUEUE RESOURCES IN AN ON-DEMAND SERVICES ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for facilitating elastic allocation of tenant-specific queue resources in an on-demand services environment in a multi-tenant environment according to one embodiment. In one embodiment and by way of example, a method includes allocating resources to a plurality of tenants, identifying, in runtime, one or more offending tenants of the plurality of tenants and one or more victim tenants of the plurality of tenants. The one or more offending tenants consume above their allocated share of the resources within a message type, and the one or more victim tenants consume below their allocated share of the resources or none of the resources within the message type. The method may further include isolating, in runtime, the offending tenants and the victim tenants, and routing, in runtime, each tenant of the offending tenants and the victim tenants to a queue dedicated to the tenant and the message type. | 2016-04-28 |
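The offending/victim classification rule in the abstract above can be sketched directly from its definition: offenders consume above their allocated share within a message type, victims consume below it or nothing. The function and tenant names are assumptions for illustration.

```python
# Illustrative classification of "offending" vs "victim" tenants for one
# message type, following the above/below-allocated-share rule in the abstract.
# Names are assumptions, not from the patent.

def classify_tenants(allocated, consumed):
    """allocated/consumed map tenant -> resource units for one message type."""
    offending = [t for t in allocated if consumed.get(t, 0) > allocated[t]]
    victims = [t for t in allocated if consumed.get(t, 0) < allocated[t]]
    return offending, victims

offending, victims = classify_tenants(
    allocated={"tenant-a": 10, "tenant-b": 10, "tenant-c": 10},
    consumed={"tenant-a": 25, "tenant-b": 10, "tenant-c": 0},
)
```

A tenant consuming exactly its share (tenant-b) is neither offending nor a victim under this reading.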
20160117196 | LOG ANALYSIS - Log analysis can include transferring compiled log analysis code, executing log analysis code, and performing a log analysis on the executed log analysis code. | 2016-04-28 |
20160117197 | Method, Apparatus, and System for Issuing Partition Balancing Subtask - A method, an apparatus, and a system are provided for issuing a partition balancing subtask, which are applied to a controller. After receiving a second partition balancing task, the controller generates a second partition balancing subtask set, where the second partition balancing subtask set includes at least one partition balancing subtask, and each partition balancing subtask records a migration partition, a node to which the migration partition belongs, and a destination node; searches a current partition balancing subtask set, and deletes a repeated partition balancing subtask between the second partition balancing subtask set and the current partition balancing subtask set; and issues remaining partition balancing subtasks after the repeated partition balancing subtask is deleted to the destination node recorded in each partition balancing subtask. | 2016-04-28 |
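The duplicate-removal step in the abstract above can be sketched by representing each partition balancing subtask as a (partition, source node, destination node) record and dropping records already present in the current set. The representation is an assumption for illustration.

```python
# Sketch of deleting repeated partition balancing subtasks: a subtask records
# a migration partition, the node it belongs to, and a destination node.
# Subtasks already in the current set are dropped; the rest are issued.
# The tuple representation is illustrative, not from the patent.

def remaining_subtasks(second_set, current_set):
    current = set(current_set)
    return [task for task in second_set if task not in current]

second = [("p1", "nodeA", "nodeB"), ("p2", "nodeA", "nodeC")]
current = [("p1", "nodeA", "nodeB")]
to_issue = remaining_subtasks(second, current)  # only the non-repeated subtask
```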
20160117198 | LOAD DISTRIBUTION APPARATUS, LOAD DISTRIBUTION METHOD, STORAGE MEDIUM, AND EVENT-PROCESSING SYSTEM - This invention implements appropriate load distribution in an event-processing system that includes: a plurality of event generators that generate events and transmit the events to an allocation apparatus, and a plurality of allocation apparatuses that receive events from one or a plurality of event generators and transmit the received events to a processing apparatus. The load distribution apparatus includes an acquiring unit that is configured to acquire a reception status or a transmission status, the statuses representing information about receiving or transmitting of the events. The load distribution apparatus also includes an updating unit that is configured to update the allocation apparatus specified for the specific event generator to another allocation apparatus, on the basis of the reception status or the transmission status, so that a load applied to the allocation apparatus is leveled among the plurality of allocation apparatuses. | 2016-04-28 |
20160117199 | COMPUTING SYSTEM WITH THERMAL MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a monitoring block configured to calculate a present power for each of multiple resource units; a thermal block, coupled to the monitoring block, configured to dynamically calculate a thermal candidate set based on the present power, the thermal candidate set for representing a present thermal load for the multiple resource units; and a target block, coupled to the thermal block, configured to determine a target resource based on the thermal candidate set for performing a target task using the target resource. | 2016-04-28 |
20160117200 | RESOURCE MAPPING IN MULTI-THREADED CENTRAL PROCESSOR UNITS - A processor determines that processing of a first thread is suspended due to limited availability of a processing resource. The processor supports execution of a plurality of threads in parallel. The processor obtains a lock on a second processing resource that is substitutable as a resource during processing of the first thread. The second processing resource is included as part of a component that is external to the processor. The component supports a number of threads that is less than the plurality of threads. The processing of the first thread is suspended until the lock is available. The processor processes the first thread using the second processing resource. The processor includes a shared register to support mapping a portion of the plurality of threads to the component. The portion of the plurality of threads is equal to, at most, the number of threads supported by the component. | 2016-04-28 |
20160117201 | LINKING A FUNCTION WITH DUAL ENTRY POINTS - A method for a static linker to resolve a function call can include identifying, during link time, a first function call of a calling function to a callee function, determining whether the callee function is a local function, determining whether the callee function has a plurality of entry points, and whether an entry point of the plurality of entry points is a local entry point. The method can include resolving, during link time, the first function call to enter the local entry point, which can include replacing a symbol for the function in the first function call with an address of the local entry point during link time. If the callee function cannot be determined to be a local function, the method can include generating stub code and directing the first function call to enter the stub code during link time. | 2016-04-28 |
20160117202 | PRIORITIZING SOFTWARE APPLICATIONS TO MANAGE ALERTS - In an example embodiment, a priority status for one or more of the plurality of software applications is determined. A mapping is then created between the one or more of the plurality of software applications and each corresponding priority status. Then, based on the mapping, one or more alerts received from the plurality of software applications are suppressed. | 2016-04-28 |
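The mapping-then-suppression flow in the abstract above can be sketched as below. The priority labels and the rule that low-priority applications have their alerts suppressed are assumptions; the patent does not specify them.

```python
# Minimal sketch of suppressing alerts via an application-to-priority mapping.
# Priority names and the suppression rule are hypothetical.

PRIORITY = {"mail": "high", "backup": "low", "chat": "low"}  # the mapping

def filter_alerts(alerts):
    # Suppress alerts from applications mapped to low priority.
    return [a for a in alerts if PRIORITY.get(a["app"]) != "low"]

shown = filter_alerts([
    {"app": "mail", "msg": "inbox full"},
    {"app": "backup", "msg": "job finished"},
])
```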
20160117203 | System and Methods of Communicating Events Between Multiple Applications - A system and methods of communicating events includes detecting an event at a first embedded application, the first embedded application being embedded in an application; triggering the detected event on an event aggregator of the application; determining, by the application, whether a second embedded application is embedded in the application; and if a second embedded application is determined to be embedded in the application, transmitting the detected event from the application to the second embedded application. | 2016-04-28 |
20160117204 | System and Methods of Communicating Events Between Multiple Applications - A system and methods of communicating events includes detecting, on a code space of an application, an event at the application; transmitting the detected event from the application to an embedded application, the embedded application being embedded in the application; and triggering the detected event on an event aggregator of the embedded application based upon data associated with the detected event. | 2016-04-28 |
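The event-aggregator pattern the two related abstracts above describe can be sketched as: an event detected at one embedded application is triggered on the host application's aggregator and forwarded to the other embedded applications. Class and method names are illustrative assumptions.

```python
# Sketch of event propagation between embedded applications through the host
# application's event aggregator. Names are hypothetical, not from the patents.

class EventAggregator:
    def __init__(self):
        self.embedded_apps = []

    def embed(self, app):
        self.embedded_apps.append(app)

    def trigger(self, event, source):
        # Forward the detected event to every embedded app except its source.
        for app in self.embedded_apps:
            if app is not source:
                app.received.append(event)

class EmbeddedApp:
    def __init__(self):
        self.received = []

host = EventAggregator()
first, second = EmbeddedApp(), EmbeddedApp()
host.embed(first)
host.embed(second)
host.trigger("click", source=first)  # event detected at the first embedded app
```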
20160117205 | TECHNIQUES TO COMPUTE ATTRIBUTE RELATIONSHIPS UTILIZING A LEVELING OPERATION IN A COMPUTING ENVIRONMENT - Various embodiments include a system having interfaces, storage devices, memory, and processing circuitry. The system may be coupled with one or more storage devices and may receive episode information for a patient from a storage device via one or more wired or wireless links, the episode information includes a plurality of episodes associated with the patient, each of the plurality of episodes is a specific instance of a medical condition. The system may generate a candidate episode pairs list comprising a plurality of candidate episode pairs. Embodiments may also include the system generating a transition list comprising episode pairs from the plurality of candidate episode pairs in the candidate episode pairs list and determining attribute relationships between the plurality of episodes for the patient based on episode pairs in the transition list, the attribute relationships used to attribute items between the plurality of episodes. | 2016-04-28 |
20160117206 | METHOD AND SYSTEM FOR BLOCK SCHEDULING CONTROL IN A PROCESSOR BY REMAPPING - A method and a system for block scheduling are disclosed. The method includes retrieving an original block ID, determining a corresponding new block ID from a mapping, executing a new block corresponding to the new block ID, and repeating the retrieving, determining, and executing for each original block ID. The system includes a program memory configured to store multi-block computer programs, an identifier memory configured to store block identifiers (ID's), management hardware configured to retrieve an original block ID from the program memory, scheduling hardware configured to receive the original block ID from the management hardware and determine a new block ID corresponding to the original block ID using a stored mapping, and processing hardware configured to receive the new block ID from the scheduling hardware and execute a new block corresponding to the new block ID. | 2016-04-28 |
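The retrieve/determine/execute loop in the abstract above can be sketched as below; each original block ID is translated through a stored mapping before the mapped block runs. Names are illustrative, not from the patent.

```python
# Sketch of block scheduling by remapping: for each original block ID,
# determine the corresponding new block ID from a mapping, then execute
# the new block. Names are hypothetical.

def run_program(original_block_ids, mapping, blocks):
    executed = []
    for original_id in original_block_ids:
        new_id = mapping[original_id]  # determine the corresponding new block ID
        blocks[new_id]()               # execute the new block
        executed.append(new_id)
    return executed

trace = []
blocks = {10: lambda: trace.append("A"), 11: lambda: trace.append("B")}
order = run_program([0, 1, 0], mapping={0: 11, 1: 10}, blocks=blocks)
```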
20160117207 | METHOD, DEVICE, AND COMPUTER PROGRAM FOR IMPROVING ACCESS TO SERVICES IN A WEB RUNTIME ENVIRONMENT - The invention relates to processing a service request by a web runtime environment in a processing device, the processing of the service request enabling a service provider to provide a service requested in the service request. After having selected a specific interface based on the service request, a web driver application associated with the service requested in the service request is executed and the selected specific interface is implemented. Then, it is possible to interact with the web driver application, via said specific interface, for providing the service by the service provider. | 2016-04-28 |
20160117208 | IDENTIFICATION OF USER INPUT WITHIN AN APPLICATION - One embodiment provides a method, comprising: embedding, using a processor, code within an application; detecting, at an electronic device, a user input within the application, wherein the user input selects an object within the application; receiving, using a processor, data associated with the selected object; and sending, using a processor, data associated with the selected object to an application selected from the group consisting of the application and another application. Other aspects are described and claimed. | 2016-04-28 |
20160117209 | METHODS FOR ASSOCIATING STORAGE ERRORS WITH SPECIFIC THIRD PARTY ARRAYS AND DEVICES THEREOF - A method, non-transitory computer readable medium, and device that associates a storage error with a specific array includes receiving a request to display one or more storage errors associated with one or more physical storage mediums within a storage device. An error cache associated with each of the one or more physical storage mediums within the storage device is scanned to identify the one or more storage errors reported by at least one of the one or more physical storage mediums within the storage device. Based on one or more business rules, the identified one or more storage errors are checked to determine whether they are in the required format. An error list comprising the identified one or more storage errors and their corresponding one or more physical storage mediums is provided when the identified one or more storage errors are determined to be in the required format. | 2016-04-28 |
20160117210 | Multicore Processor Fault Detection For Safety Critical Software Applications - A method for multicore processor fault detection during execution of safety critical software applications in a multicore processor environment involves dedicating the complete resources of at least a part of at least one processor core to execution of diagnostics software application whilst dedicating remaining resources to execution of a safety-critical software application, thereby enabling parallel execution of the diagnostics software application and the safety-critical software application. There is also provided a controller for multicore processor fault detection during execution of safety critical software applications in a multicore processor environment. The controller includes a multicore processor environment. The controller may be part of a control system. The method may be provided as a computer program. | 2016-04-28 |
20160117211 | ERROR TROUBLESHOOTING USING A CORRELATED KNOWLEDGE BASE - Disclosed are various embodiments for an error troubleshooting application. Error data is obtained from a client device. A correlated knowledge base is referenced to determine if a solution is associated with the error data. If a solution is associated with the error data, a notification embodying the solution is communicated to the client device. If a solution is not associated with the error data, a notification indicating the solution is unknown is communicated to the client device. | 2016-04-28 |
20160117212 | EVALUATION OF PERFORMANCE OF SOFTWARE APPLICATIONS - A method and system for evaluating performance of software applications. Steps in a first software application within a first web site are mapped to respective similar-function steps in a second software application within a second web site. Measures of performance of: each mapped step in the first software application, the respective similar-function steps in the second software application, and other steps in the second application are determined. A measure of performance of the first software application is determined, based on the measures of performance of each mapped step in the first software application. A measure of performance of the second software application is determined, based on the measures of performance of the respective similar-function steps and the other steps in the second software application. Improved performance is obtained for the first and/or second software application by utilizing the measure of performance of the first and/or second software application, respectively. | 2016-04-28 |
20160117213 | DYNAMIC ADAPTIVE APPROACH FOR FAILURE DETECTION OF NODE IN A CLUSTER - The present disclosure discloses a method and a network device for failure detection of nodes in a cluster. Specifically, a network device transmits data to another device at a first time. The network device then receives an acknowledgment of the data from the second device at a second time. Next, the network device determines a Round Trip Time (RTT) for the first device and the second device based on the first time and the second time. Based on the RTT, the network device determines a first frequency for transmitting a heartbeat protocol message between the first device and the second device, and transmits a heartbeat protocol message between the first device and the second device at the first frequency. | 2016-04-28 |
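The RTT-to-heartbeat-frequency step in the abstract above can be sketched as below. The scaling rule (heartbeat interval proportional to RTT) and the constant `k` are assumptions for illustration; the patent does not state a formula.

```python
# Sketch of adapting heartbeat timing to a measured round trip time (RTT):
# RTT is derived from send and acknowledgment times, and the heartbeat
# interval is scaled from it. The k*RTT rule is hypothetical.

def round_trip_time(send_time, ack_time):
    # First time: data sent; second time: acknowledgment received.
    return ack_time - send_time

def heartbeat_interval(rtt, k=10):
    # Slower links (larger RTT) get less frequent heartbeats.
    return k * rtt

rtt = round_trip_time(send_time=100.0, ack_time=100.2)  # seconds
interval = heartbeat_interval(rtt)                      # seconds between heartbeats
```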
20160117214 | CONTROLLER - A controller includes a microcomputer that operates in a normal mode or in a low power mode and communicates with an external device, a monitor circuit that monitors an operation state of the microcomputer based on a monitor signal output from the microcomputer, and a start circuit that controls a drive of the monitor circuit. Communication signals exchanged between the microcomputer and the external device include a dominant state and a recessive state, and the start circuit monitors the communication signals. When the communication signals in the low power mode of the microcomputer include the dominant state, the start circuit puts the monitor circuit in a monitoring state. When no monitor signal is input from the microcomputer to the monitor circuit that is operating in the monitoring state, the monitor circuit determines that an abnormality has occurred in the microcomputer. | 2016-04-28 |
20160117215 | SYSTEM AND METHOD FOR DYNAMIC BANDWIDTH THROTTLING BASED ON DANGER SIGNALS MONITORED FROM ONE OR MORE ELEMENTS UTILIZING SHARED RESOURCES - A method and system for adjusting bandwidth within a portable computing device based on danger signals monitored from one or more elements of the portable computing device are disclosed. A danger level of an unacceptable deadline miss (“UDM”) element of the portable computing device may be determined with a danger level sensor within the UDM element. Next, a quality of service (“QoS”) controller may adjust a magnitude for one or more danger levels received based on the UDM element type that generated the danger level and based on a potential fault condition type associated with the particular danger level. The danger levels received from one UDM element may be mapped to at least one of another UDM element and a non-UDM element. A quality of service policy for each UDM element and non-UDM element may be mapped in accordance with the danger levels. | 2016-04-28 |
20160117216 | TEMPERATURE RELATED ERROR MANAGEMENT - Apparatuses and methods for temperature related error management are described. One or more apparatuses for temperature related error management can include an array of memory cells and a write temperature indicator appended to at least one predetermined number of bytes of the stored data in the array of memory cells. The apparatuses can include a controller configured to determine a numerical temperature difference between the write temperature indicator and a read temperature indicator and determine, from stored operations, an error management operation for the stored data based, at least in part, on comparison of the numerical temperature difference to a temperature difference threshold. | 2016-04-28 |
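The threshold comparison in the abstract above can be sketched as: compute the difference between the write and read temperature indicators, then select an error management operation from stored options. The threshold value and operation names are assumptions for illustration.

```python
# Sketch of selecting an error management operation from the write/read
# temperature difference. Threshold and operation names are hypothetical.

TEMP_DIFF_THRESHOLD = 20  # degrees; hypothetical value

def error_management_operation(write_temp, read_temp):
    # Numerical temperature difference between write and read indicators.
    difference = abs(read_temp - write_temp)
    if difference > TEMP_DIFF_THRESHOLD:
        return "strong-ecc"   # large drift: apply a heavier correction scheme
    return "default-ecc"

op_hot = error_management_operation(write_temp=25, read_temp=60)
op_ok = error_management_operation(write_temp=25, read_temp=30)
```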
20160117217 | APPARATUS AND A METHOD OF DETECTING ERRORS ON REGISTERS - An error detection circuit on a semiconductor chip detects whether soft errors have affected flip-flop implemented registers on the semiconductor chip. A signature of these flip-flop implemented registers on the semiconductor chip is periodically captured. The signature allows for the integrity of the flip-flop implemented registers to be constantly monitored. A soft error occurring on any of the flip-flop implemented registers can be immediately detected. In response to the detection, an interrupt is raised to notify software to take action. | 2016-04-28 |
20160117218 | MONITORING DATA ERROR STATUS IN A MEMORY - A method for outputting data error status of a memory device includes generating a data status indication code indicating error status of a data chunk transmitted by a memory controller, combining the data status indication code with the data chunk to generate an output signal, and outputting the output signal to a data bus pin. | 2016-04-28 |
20160117219 | DEVICE, SYSTEM AND METHOD TO RESTRICT ACCESS TO DATA ERROR INFORMATION - Techniques and mechanisms to provide selective access to data error information by a memory controller. In an embodiment, a memory device stores a first value representing a baseline number of data errors determined prior to operation of the memory device with the memory controller. Error detection logic of the memory device determines a current count of data errors, and calculates a second value representing a difference between the count of data errors and the baseline number of data errors. The memory device provides the second value to the memory controller, which is unable to identify that the second value is a relative error count. In another embodiment, the memory controller is restricted from retrieving the baseline number of data errors. | 2016-04-28 |
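The relative-count mechanism in the abstract above can be sketched as below: the device reports only the difference between its current error count and a baseline captured before operation with the controller, so the baseline itself stays hidden. Class and method names are illustrative.

```python
# Sketch of restricting access to data error information: the memory device
# reports a relative error count (current minus baseline), and the baseline
# is never exposed to the controller. Names are hypothetical.

class MemoryDevice:
    def __init__(self, baseline_errors):
        self._baseline = baseline_errors  # determined prior to operation; hidden
        self._current = baseline_errors

    def record_error(self):
        self._current += 1

    def report_errors(self):
        # The controller receives a relative value, never the baseline itself.
        return self._current - self._baseline

device = MemoryDevice(baseline_errors=7)
device.record_error()
device.record_error()
reported = device.report_errors()  # 2, regardless of the baseline of 7
```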
20160117220 | DATA STORAGE DEVICE AND ERROR CORRECTION METHOD CAPABLE OF ADJUSTING VOLTAGE DISTRIBUTION - The present invention provides a data storage device including a flash memory and a controller. The controller is configured to perform a first read operation to read a first page corresponding to a first word line of the flash memory according to a read command of a host, and perform a distribution-adjustment procedure when data read by the first read operation cannot be recovered by coding, wherein the controller is further configured to perform an adjustable read operation to read a second page corresponding to a second word line of the flash memory in the distribution-adjustment procedure. | 2016-04-28 |
20160117221 | ERROR DETECTION AND CORRECTION UTILIZING LOCALLY STORED PARITY INFORMATION - A processing system includes a memory coupled to a processor device. The memory stores data blocks, with each data block having a separate associated checksum value stored along with the data block in the memory. The processor device has a storage location that stores parity information for the data blocks, with the parity information having a plurality of parity blocks. Each parity block represents a parity of a corresponding set of data blocks. The parity blocks can be accessed for use in error detection and correction schemes used by the processing system. | 2016-04-28 |
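The parity blocks in the abstract above can be sketched with the standard XOR construction: a parity block is the XOR of its corresponding set of data blocks, so any single lost block can be rebuilt from the parity plus the surviving blocks. The XOR choice is an assumption; the patent only says the parity blocks represent a parity of the data blocks.

```python
# Sketch of XOR parity: the parity block is the byte-wise XOR of a set of
# data blocks, and the same operation recovers a single missing block.

def parity_of(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x01\x02", b"\x04\x08", b"\xf0\x0f"]
parity = parity_of(data)

# Recover the second block from the parity plus the remaining blocks.
recovered = parity_of([parity, data[0], data[2]])
```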
20160117222 | TIME MULTIPLEXED REDUNDANT ARRAY OF INDEPENDENT TAPES - Embodiments relate to a computer system for storing data on a time multiplexed redundant array of independent tapes. An aspect includes a memory device that buffers data received by the computer system to be written to a set of tape data storage devices. The data is written to the set of tape data storage devices in blocks that form parity stripes across the set of tape data storage device. Aspects further includes a tape drive that writes data to one of the set of tape data storage devices at a time in a tape-sequential manner and a processor that computes a parity value for each of the parity stripes. The tape drive writes the parity values for each of the parity stripes to a last subset of tapes of the set of tape data storage devices. | 2016-04-28 |
20160117223 | METHOD FOR CONCURRENT SYSTEM MANAGEMENT AND ERROR DETECTION AND CORRECTION REQUESTS IN INTEGRATED CIRCUITS THROUGH LOCATION AWARE AVOIDANCE LOGIC - A method of incorporating active error correction inside a memory device is used, whereby memory scrub cycles can be completely hidden from an end user. The method simplifies the design of the memory interface and simplifies the data integrity management unit for the end user. An arbitration unit is implemented to allow concurrent processing of primary (user) and secondary (scrub) requests. The arbitration unit is location aware in context to the primary interface and is responsible for eliminating overlapping memory requests. | 2016-04-28 |
20160117224 | COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN ANALYSIS PROGRAM, ANALYSIS APPARATUS, AND ANALYSIS METHOD - An analysis method including: storing information on modules through which each processing passes with respect to each of a plurality of processings in which shared modules exist; determining a normal or abnormal state of each of the processings which are performed during a predetermined time interval based on log information related to the plurality of processings which are performed during the predetermined time interval; correcting the information on the modules according to each of the processings which are performed during the predetermined time interval, based on a predetermined condition, when an abnormal module is not identified in a process of identifying the abnormal module by using a determination result of the normal or abnormal state and the information on the modules according to each of the processings; and identifying the abnormal module by using the determination result and the corrected information on the modules. | 2016-04-28 |
20160117225 | MOBILE FLASH STORAGE BOOT PARTITION AND/OR LOGICAL UNIT SHADOWING - Embodiments of the inventive concept include computer-implemented method for shadowing one or more boot images of a mobile device. The technique can include duplicating boot images to shadow partitions in a user area of a non-volatile memory device such as a flash memory. The technique can include detecting boot image corruption, and causing a mobile device to boot from the shadow partitions. The technique can include dynamically shadowing and releasing blocks used by the shadow partitions. The technique can include boot failure recovery and bad image preservation through firmware flash translation layer (FTL) logical to physical mapping updates. Boot image corruption failures can be recovered from and/or debugged using the shadow partitions. | 2016-04-28 |
20160117226 | DATA RECOVERY TECHNIQUE FOR RECOVERING DATA FROM AN OBJECT STORE - A system, method, and computer program product for a block-based backing up a storage device to an object storage service is provided. This includes the generation of a data object that encapsulates a data of a data extent. The data extent covers a block address range of the storage device. The data object is named with a base name that represents a logical block address (LBA) of the data extent. The base name is appended with an identifier that deterministically identifies a recovery point that the data object is associated with. The base name combined with the identifier represents a data object name for the data object. The named data object is then transmitted to the object storage service for backup of the data extent. At an initial backup, the full storage device is copied. In incremental backups afterwards, only those data extents that changed are backed up. | 2016-04-28 |
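The object-naming scheme in the abstract above can be sketched as below: the base name encodes the data extent's logical block address, and a recovery point identifier is appended. The exact format (hex, zero padding, `.` separator) is an assumption for illustration.

```python
# Sketch of naming a backup data object: base name derived from the extent's
# logical block address (LBA), with a recovery point identifier appended.
# The formatting details are hypothetical.

def object_name(lba, recovery_point):
    base_name = f"{lba:016x}"               # base name representing the LBA
    return f"{base_name}.{recovery_point}"  # append the recovery point id

name = object_name(lba=0x2000, recovery_point=3)
```

Because the base name is deterministic for a given block range, incremental backups can overwrite or version only the extents that changed.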
20160117227 | DATA RECOVERY TECHNIQUE FOR RECOVERING DATA FROM AN OBJECT STORAGE SERVICE - A system and method for recovering data backed up to an object store are provided. In some embodiments, the method includes identifying an address space of a data set to be recovered. A set of data objects stored by an object-based system is identified that corresponds to the address space and a selected recovery point. The identified set of data objects is retrieved, and data contained in the retrieved set of data objects is stored to at least one storage device at a block address determined by the retrieved set of data objects to recreate the address space. In some embodiments, the set of data objects is retrieved by providing an HTTP request and receiving the set of data objects as an HTTP response. In some embodiments, the set of data objects are retrieved based on the data objects being the target of a data transaction. | 2016-04-28 |
20160117228 | Point in Time Database Restore from Storage Snapshots - Archiving a database and point in time recovery of the database. A method includes taking a first snapshot of a database. The first snapshot of the database includes a first snapshot of the data in the data storage and a first snapshot of the log records in the log storage. The method further includes taking a second snapshot of the database. The second snapshot of the database includes a second snapshot of the data in data storage and a second snapshot of the log records. The method further includes restoring the database to a particular point by applying the first snapshot of the data in the data storage to the database, applying the first snapshot of the log records in the log storage to the database and applying a portion of the second snapshot of the log records in the log storage to the database. | 2016-04-28 |
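The restore sequence in the abstract above can be sketched as: apply the first snapshot of the data, replay the first snapshot's log records, then replay only the portion of the second snapshot's log records up to the target point in time. The key/value log representation is an assumption for illustration.

```python
# Sketch of point-in-time restore from snapshots: first data snapshot, then
# first log snapshot, then a time-bounded portion of the second log snapshot.
# Log records are modeled as (time, key, value) tuples; this is hypothetical.

def restore(first_data, first_log, second_log, target_time):
    db = dict(first_data)               # apply the first snapshot of the data
    for time, key, value in first_log:  # apply the first snapshot's log records
        db[key] = value
    for time, key, value in second_log: # apply a portion of the second log
        if time <= target_time:
            db[key] = value
    return db

db = restore(
    first_data={"x": 1},
    first_log=[(5, "x", 2)],
    second_log=[(10, "y", 7), (20, "x", 9)],
    target_time=15,
)
```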
20160117229 | SELECTIVE ACCESS TO EXECUTABLE MEMORY - In an embodiment, a data processing method comprises: in a computer executing a supervisor program, the supervisor program establishing different memory access permissions comprising any combination of read, write, and execute permissions for one or more different regions of memory of a first domain, receiving a request from a process to execute a particular memory page of the regions of memory, the particular memory page comprising a memory access permission set to read-writeable or read-only, throwing an execute fault for the particular memory page, performing one or more responsive actions to restore execution access or content of the particular memory page, and after performing the one or more responsive actions, setting the memory access permission to execute only. | 2016-04-28 |
20160117230 | HIGH AVAILABILITY SCHEDULER FOR SCHEDULING SEARCHES OF TIME STAMPED EVENTS - A high availability scheduler of tasks in a cluster of server devices is provided. A server device of the cluster of server devices enters a leader state based upon the results of a consensus election process in which the server device participates with others of the cluster of server devices. Upon entering the leader state, the server device schedules one or more tasks by assigning each of the one or more tasks to a device, wherein the one or more tasks involve initiating a search of time stamped events. | 2016-04-28 |
20160117231 | Complex Network Modeling For Disaster Recovery - A cloud based method and system for the backup and recovery of a computer or computer system is provided with the ability to determine a network model that emulates the network environment of the computer or computer system being backed up. Should a disaster event occur, the network model is used by a disaster recovery computer to construct a virtual network environment that emulates the network environment of the backed up computer or computer system. | 2016-04-28 |
20160117232 | STORAGE DEVICE - A storage device of an embodiment includes a voltage measurement unit that measures a voltage of power supplied from a host, a volatile memory, a non-volatile memory including a saving area and a normal area, a data compression and decompression unit, and a controller. The controller includes a power-supply voltage determining unit which compares the voltage measured by the voltage measurement unit to a predetermined threshold value, a data saving unit which writes compression user data obtained by compressing user data by the data compression and decompression unit in the saving area when the voltage is less than the predetermined threshold value and the user data is included in the volatile memory, and a data rewriting unit which writes the compression user data that is decompressed in the normal area when the compression user data is included in the saving area at the time of supplying the power. | 2016-04-28 |
20160117233 | Quasi Disk Drive For Testing Disk Interface Performance - Embodiments relate to diagnostic evaluation of hardware components of a computer machine. A conventional storage device is replaced with a modified storage device. Read and write operations are received by the modified storage device. Issuance of a response to the read and write operations is limited to an acknowledgement receipt, which is employed to evaluate performance and/or bandwidth of the machines with respect to hardware for data storage. | 2016-04-28 |
20160117234 | REAL-TIME HIERARCHICAL PROTOCOL DECODING - Real-time USB class level decoding is disclosed. In some embodiments, a first packet associated with a USB class level operation associated with a target USB device that is being monitored is received. A second packet generated by a USB hardware analyzer configured to observe USB traffic associated with the target USB device is received. It is determined based at least in part on a time associated with one or both of the first packet and the second packet that the class level operation has timed out. | 2016-04-28 |
20160117235 | SOFTWARE AUTOMATION AND REGRESSION MANAGEMENT SYSTEMS AND METHODS - An automation and regression management method for testing software in a highly-complex cloud-based system with a plurality of nodes, through an automation and regression management system, includes receiving a plurality of requests for automated test runs on nodes in the highly-complex cloud-based system; managing the plurality of requests by either starting an automated test run on a node or queuing the automated test run if another automated test run is already operating on the node; determining details of each of the automated test runs subsequent to completion; storing the details of each of the automated test runs in a database; and providing the details of each of the automated test runs to a requesting user. | 2016-04-28 |
20160117236 | INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - An information processing apparatus comprises: an insertion unit that inserts, into a class file that corresponds to one application, a first bytecode for tallying information about a resource used by an object generated by execution of a bytecode that is included in the class file; and a tallying unit that, if an application generates an object, tallies information about a resource used by the object generated by the application, wherein the one application that has executed the first bytecode is identified by execution of the first bytecode, the one application thus identified and information about a resource used by a generated object are registered to a storage unit in association with each other, and the tallying unit tallies a resource usage amount for each application based on the information registered to the storage unit. | 2016-04-28 |
20160117237 | SYSTEMS AND/OR METHODS FOR MONITORING LIVE SOFTWARE - Certain example embodiments described herein relate to techniques for observing an internal state of a software application executing in a runtime environment. For instance, certain example embodiments include traversing a structure of multiple live data objects in the executing software application; generating a graph of shadow data objects based on the traversing, with each shadow data object of the graph corresponding to a live data object of the executing software application; and providing access to the generated shadow graph via a user interface. | 2016-04-28 |
20160117238 | PREDICTIVE APPROACH TO ENVIRONMENT PROVISIONING - Embodiments of the present invention provide methods, systems, and computer program products for building an environment. Embodiments of the present invention can be used to allocate resources and build an environment such that the environment is built when a user is prepared to test one or more portions of code in the environment. Embodiments of the present invention can be used to reduce the “lag time” developers experience between waiting for the code to be built and for resources to be provisioned, and can also provide a less costly alternative to maintaining and operating dedicated environments. | 2016-04-28 |
20160117239 | GENERATING AN EVOLVING SET OF TEST CASES - A method, system and computer program product for defining an evolving set of test cases for testing software applications. In an embodiment, the method comprises identifying a set of criteria for the test cases; assigning a weight to each of the criteria; and for each of a multitude of test cases, assigning a value to each of the criteria, and determining a criteria score for the test case based on the values assigned to the criteria for the test case and the weights assigned to the criteria. Each of the test cases is assigned to one of a plurality of groups based on the criteria scores. Each of the groups of test cases is associated with one of a plurality of testing procedures, and one of those procedures is selected to test a software application using the group of test cases associated with that selected testing procedure. | 2016-04-28 |
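The weighted-criteria scoring described in the entry above can be sketched as follows. All names, weights, and group boundaries here are illustrative assumptions, not taken from the patent; the only structure carried over is values-times-weights scoring followed by score-based group assignment.

```python
def criteria_score(values, weights):
    """Weighted sum of per-criterion values for one test case."""
    return sum(values[c] * weights[c] for c in weights)

def assign_group(score, boundaries):
    """Place a score into a group; boundaries are ascending upper bounds."""
    for group, upper in enumerate(boundaries):
        if score <= upper:
            return group
    return len(boundaries)  # scores above every boundary: last group

# hypothetical criteria and weights
weights = {"coverage": 0.5, "age": 0.2, "failure_rate": 0.3}
case = {"coverage": 0.9, "age": 0.1, "failure_rate": 0.4}

score = criteria_score(case, weights)    # 0.5*0.9 + 0.2*0.1 + 0.3*0.4 = 0.59
group = assign_group(score, [0.3, 0.6])  # group 1 of three
```

Each group would then be associated with one of the testing procedures, and the procedure runs against the test cases in its group.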
20160117240 | PERFORMING SECURE ADDRESS RELOCATION WITHIN A MULTI-PROCESSOR SYSTEM SHARING A SAME PHYSICAL MEMORY CHANNEL TO EXTERNAL MEMORY - In response to receiving a selection to override an existing memory allocation of one or more regions of an external memory device within a memory register for a particular bridge from among multiple bridges within an integrated circuit, wherein the multiple bridges connect to a shared physical memory channel to the external memory device, a remap controller of the particular bridge reads, from a super rank register, one or more super rank values specifying one or more relocation regions of the external memory device connected to an interface of the integrated circuit. The remap controller remaps the memory register for the particular bridge with the one or more super rank values specified in the super rank register to relocate memory accesses by the bridge to the one or more relocation regions of the external memory device. | 2016-04-28 |
20160117241 | METHOD FOR USING SERVICE LEVEL OBJECTIVES TO DYNAMICALLY ALLOCATE CACHE RESOURCES AMONG COMPETING WORKLOADS - A method, device, and non-transitory computer readable medium that dynamically allocates cache resources includes monitoring a hit or miss rate of a service level objective for each of a plurality of prior workloads and a performance of each of a plurality of cache storage resources. At least one configuration for the cache storage resources for one or more current workloads is determined based at least on a service level objective for each of the current workloads, the monitored hit or miss rate for each of the plurality of prior workloads and the monitored performance of each of the plurality of cache storage resources. The cache storage resources are dynamically partitioned among each of the current workloads based on the determined configuration. | 2016-04-28 |
20160117242 | OPTIMIZATION OF NON-VOLATILE MEMORY IN MESSAGE QUEUING - Embodiments of the invention provide for the optimization of utilization of non-volatile memory in message queuing. In an embodiment of the invention, a method for optimizing utilization of non-volatile memory in message queuing includes receiving a new message in a message queueing system implemented in a host computing system. The method also includes storing the new message as a master message in non-volatile memory of the host computing system. The method yet further includes subsequently receiving different messages that each share redundant information with the master message. The method even yet further includes delta encoding each of the different messages and storing the delta encoded different messages in the non-volatile memory. Finally, the method includes deleting the master message from the non-volatile memory only once each of the different messages and the master message have been acknowledged by at least one consumer subscribing to the message queuing system. | 2016-04-28 |
20160117243 | OPTIMIZATION OF NON-VOLATILE MEMORY IN MESSAGE QUEUING - Embodiments of the invention provide for the optimization of utilization of non-volatile memory in message queuing. In an embodiment of the invention, a method for optimizing utilization of non-volatile memory in message queuing includes receiving a new message in a message queueing system implemented in a host computing system. The method also includes storing the new message as a master message in non-volatile memory of the host computing system. The method yet further includes subsequently receiving different messages that each share redundant information with the master message. The method even yet further includes delta encoding each of the different messages and storing the delta encoded different messages in the non-volatile memory. Finally, the method includes deleting the master message from the non-volatile memory only once each of the different messages and the master message have been acknowledged by at least one consumer subscribing to the message queuing system. | 2016-04-28 |
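The master/delta scheme described above can be sketched as a minimal in-memory model: the first message is kept whole as the "master", later messages sharing content with it are stored only as deltas, and the master is deleted once every message (master included) has been acknowledged. The delta format here (longest-common-prefix length plus suffix) is a deliberate simplification of whatever delta encoding an actual implementation would use.

```python
class DeltaQueue:
    def __init__(self, master: bytes):
        self.master = master
        self.deltas = []    # (prefix_len, suffix) pairs, one per later message
        self.unacked = {0}  # message ids awaiting consumer acknowledgment

    def add(self, msg: bytes):
        n = 0
        while n < min(len(msg), len(self.master)) and msg[n] == self.master[n]:
            n += 1
        self.deltas.append((n, msg[n:]))    # store only the difference
        self.unacked.add(len(self.deltas))  # ids 1.. for delta-encoded messages

    def read(self, msg_id: int) -> bytes:
        if msg_id == 0:
            return self.master
        n, suffix = self.deltas[msg_id - 1]
        return self.master[:n] + suffix     # rebuild from master + delta

    def ack(self, msg_id: int):
        self.unacked.discard(msg_id)
        if not self.unacked:                # all acked: master may be deleted
            self.master = None

q = DeltaQueue(b"order:1234 status=NEW")
q.add(b"order:1234 status=PAID")
assert q.read(1) == b"order:1234 status=PAID"
```

The deferred deletion matters because every delta-encoded message needs the master to be reconstructed; only when nothing can still be read is the master safe to drop.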
20160117244 | DATA WRITING METHOD, MEMORY CONTROL CIRCUIT UNIT AND MEMORY STORAGE APPARATUS - A data writing method for a rewritable non-volatile memory module is provided. The method includes: compressing data to generate first data; determining whether a data length of the first data meets a predetermined condition. The method also includes: if the data length of the first data meets the predetermined condition, writing the first data into a first physical erasing unit among a plurality of physical erasing units; if the data length of the first data does not meet the predetermined condition, generating dummy data according to a predetermined rule, padding the first data with the dummy data to generate second data and writing the second data into the first physical erasing unit. A data length of the second data meets the predetermined condition. | 2016-04-28 |
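The write path in the entry above can be sketched as compress, test a length condition, and pad with dummy data when the condition fails. Interpreting the "predetermined condition" as an exact multiple of the physical page size, and null bytes as the "predetermined rule" for dummy data, are assumptions for illustration; `zlib` stands in for the controller's compressor.

```python
import zlib

PAGE_SIZE = 512   # assumed physical-page granularity
DUMMY = b"\x00"   # assumed dummy-byte rule

def prepare_write(data: bytes) -> bytes:
    """Compress, then pad the result so its length fits the page condition."""
    first = zlib.compress(data)            # "first data"
    if len(first) % PAGE_SIZE == 0:        # condition met: write as-is
        return first
    pad = PAGE_SIZE - (len(first) % PAGE_SIZE)
    return first + DUMMY * pad             # "second data": padded to fit

out = prepare_write(b"hello world" * 100)
assert len(out) % PAGE_SIZE == 0
```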
20160117245 | APPARATUS, A SYSTEM, A METHOD AND A COMPUTER PROGRAM FOR ERASING DATA STORED ON A STORAGE DEVICE - An approach for erasing data being stored in a data storage apparatus is provided, which may be provided e.g. as an apparatus, as a method, as a system or as a computer program. A sequence of uncompressible data is obtained fulfilling predetermined criteria, which includes a statistical measure indicative of compressibility or uncompressibility of the sequence of uncompressible data meeting a predetermined criterion, wherein the sequence of uncompressible data is divided into one or more blocks of uncompressible data, the sum of the sizes of the one or more blocks of uncompressible data being larger than or equal to the storage capacity of the data storage apparatus. The one or more blocks of uncompressible data is provided to the data storage apparatus for storage therein to overwrite the data currently stored in the data storage apparatus. | 2016-04-28 |
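The erase approach above can be sketched as: generate blocks a compressor cannot shrink, totalling at least the device capacity, and write them over everything stored. Using `zlib` as the statistical compressibility check and `os.urandom` as the data source are assumed stand-ins for whatever criterion and generator a real implementation would use.

```python
import os
import zlib

def uncompressible_blocks(capacity: int, block_size: int = 4096):
    """Yield random blocks whose compressed size is not smaller than raw,
    until their total size reaches the given storage capacity."""
    total = 0
    while total < capacity:
        block = os.urandom(block_size)
        # predetermined criterion: compression must not reduce the block
        if len(zlib.compress(block)) >= len(block):
            yield block
            total += len(block)

blocks = list(uncompressible_blocks(16 * 4096))
assert sum(len(b) for b in blocks) >= 16 * 4096
```

Random data compresses to slightly more than its raw size (the compressor adds framing overhead), so the criterion holds for essentially every generated block; any block that somehow failed would simply be regenerated.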
20160117246 | METHOD AND APPARATUS FOR CROSS-CORE COVERT CHANNEL - Passing messages between two virtual machines that use a single multicore processor having inclusive cache includes using a cache-based covert channel. A message bit in a first machine is interpreted as a lowest level cache flush. The cache flush in the first machine clears a L1 level cache in the second machine because of the inclusiveness property of the multicore processor cache. The second machine reads its cache and records access time. If the access time is long, then the cache was previously cleared and a logical 1 was sent by the first machine. A short access time is interpreted as a logical 0 by the second machine. By sending many bits, a message can be sent from the first virtual machine to the second virtual machine via the cache-based covert channel without using non-cache memory as a covert channel. | 2016-04-28 |
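The receiver side of the timing channel above reduces to a threshold decision: a slow cache access means the sender flushed the shared inclusive cache (bit 1), a fast access means it did not (bit 0). The sketch below simulates only that decoding step with made-up cycle counts; an actual exploit needs cache-line flushes and cycle-accurate timers, which Python cannot express.

```python
THRESHOLD_CYCLES = 100  # assumed boundary between a cache hit and a miss

def decode_bits(access_times):
    """Map the second VM's per-interval access times to transmitted bits."""
    return [1 if t > THRESHOLD_CYCLES else 0 for t in access_times]

# access times measured by the second VM in successive intervals (illustrative)
measured = [250, 40, 230, 35, 38, 260]
assert decode_bits(measured) == [1, 0, 1, 0, 0, 1]
```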
20160117247 | COHERENCY PROBE RESPONSE ACCUMULATION - A processor accumulates coherency probe responses, thereby reducing the impact of coherency messages on the bandwidth of the processor's communication fabric. A probe response accumulator is connected to a processing module of the processor, the processing module having multiple processor cores and associated caches. In response to a coherency probe, the processing module generates a different coherency probe response for each of the caches. The probe response accumulator combines the different coherency probe responses into a single coherency probe response and communicates that single response over the communication fabric. | 2016-04-28 |
20160117248 | COHERENCY PROBE WITH LINK OR DOMAIN INDICATOR - A processor includes a set of processing modules, each of the processing modules including a cache and a coherency manager that keeps track of the memory addresses of data stored at the caches of other processing modules. In response to its local cache requesting access to a particular memory address or other triggering event, the coherency manager generates a coherency probe. In the event that the generated coherency probe is targeted to multiple processing modules, the coherency manager includes a set of multicast bits indicating the processing modules whose caches include copies of the data targeted by the multicast probe. A transport switch that connects the processing module to the fabric communicates the coherency probe only to the subset of processing modules indicated by the multicast bits. | 2016-04-28 |
20160117249 | SNOOP FILTER FOR MULTI-PROCESSOR SYSTEM AND RELATED SNOOP FILTERING METHOD - A snoop filter for a multi-processor system has a storage device and a control circuit. The control circuit manages at least a first-type entry and at least a second-type entry stored in the storage device. The first-type entry is configured to record information indicative of a first cache of the multi-processor system and first requested memory addresses that are associated with multiple first cache lines each being only available in the first cache. The second-type entry is configured to record information indicative of multiple second caches of the multi-processor system and at least a second requested memory address that is associated with a second cache line being available in each of the multiple second caches. | 2016-04-28 |
20160117250 | Apparatus and Method of Throttling Hardware Pre-fetch - Hardware-based prefetching for processor systems is implemented. A prefetch unit can be provided in a cache subsystem that allocates a prefetch tracker in response to a demand request for a cache line that missed. In response to subsequent demand requests to consecutive cache lines, a confidence indicator is increased. In response to further demand misses and the confidence indicator value, a prefetch tier is increased, which allows the prefetch tracker to initiate prefetch requests for more cache lines. Requests for cache lines that are more than two cache lines apart within a match window for the allocated prefetch tracker decrease the confidence faster than requests for consecutive cache lines increase it. An age counter tracks when the last demand request within the match window was received. The prefetch tier can be decreased in response to reduced confidence and increased age. | 2016-04-28 |
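The tracker behavior described above can be sketched as a small state machine: consecutive-line demand requests raise confidence slowly, requests more than two lines apart (but still inside the match window) lower it faster, and the tier rises or falls with confidence. The step sizes, thresholds, window size, and tier-to-depth mapping below are all assumptions.

```python
class PrefetchTracker:
    MATCH_WINDOW = 32  # assumed: cache lines covered by this tracker

    def __init__(self, base_line: int):
        self.last_line = base_line
        self.confidence = 0
        self.tier = 0      # higher tier -> prefetch more lines ahead

    def on_demand(self, line: int):
        delta = abs(line - self.last_line)
        if delta > self.MATCH_WINDOW:
            return         # outside this tracker's match window: ignore
        if delta <= 2:
            self.confidence += 1                           # gain slowly
        else:
            self.confidence = max(0, self.confidence - 2)  # lose faster
        self.last_line = line
        if self.confidence >= 4 and self.tier < 3:
            self.tier += 1        # promote: allow deeper prefetch
            self.confidence = 0
        elif self.confidence == 0 and self.tier > 0:
            self.tier -= 1        # demote on lost confidence

    def lines_to_prefetch(self) -> int:
        return 2 ** self.tier     # assumed tier-to-depth mapping

t = PrefetchTracker(100)
for line in (101, 102, 103, 104):  # a consecutive run builds confidence
    t.on_demand(line)
assert t.tier == 1
```

An age counter (not modeled here) would additionally demote the tier when no demand request lands in the match window for too long.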
20160117251 | MANAGING METHOD FOR CACHE MEMORY OF SOLID STATE DRIVE - A managing method for a cache memory of a solid state drive includes the following steps. When the solid state drive decides to perform a garbage collection, a storing space of the cache memory is divided into plural storing portions according to at least one of the command type of an access command, access data size of the access command and the drive free space. A first storing portion of the cache memory is set as a buffering unit for a garbage collecting purpose. A second storing portion of the cache memory is set as a buffering unit for a writing purpose. | 2016-04-28 |
20160117252 | Processing of Un-Map Commands to Enhance Performance and Endurance of a Storage Device - A storage device and method enable processing of un-map commands. In one aspect, the method includes (1) determining whether a size of an un-map command satisfies (e.g., is greater than or equal to) a size threshold, (2) if the size of the un-map command satisfies the size threshold, performing one or more operations of a first un-map process, wherein the first un-map process forgoes (does not include) saving a mapping table to non-volatile memory of a storage device, and (3) if the size of the un-map command does not satisfy the size threshold, performing one or more operations of a second un-map process, wherein the second un-map process forgoes (does not include) saving the mapping table to non-volatile memory of the storage device and forgoes (does not include) flushing a write cache to non-volatile memory of the storage device. | 2016-04-28 |
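The branch structure of the entry above can be sketched directly: large un-map commands take a path that skips persisting the mapping table, while small ones take a path that additionally skips flushing the write cache. The threshold value and the operation names are illustrative placeholders, not the storage device's actual internals.

```python
SIZE_THRESHOLD = 1 << 20  # assumed: 1 MiB

def handle_unmap(size: int) -> list:
    """Return the operations performed for an un-map command of this size."""
    ops = ["mark_lbas_unmapped"]        # common to both un-map processes
    if size >= SIZE_THRESHOLD:
        # first un-map process: flushes the write cache but forgoes
        # saving the mapping table to non-volatile memory
        ops.append("flush_write_cache")
    # second un-map process (small un-maps) forgoes both the mapping-table
    # save and the write-cache flush, so nothing further is appended
    return ops

assert handle_unmap(4 << 20) == ["mark_lbas_unmapped", "flush_write_cache"]
assert handle_unmap(4096) == ["mark_lbas_unmapped"]
```

Skipping those writes for small un-maps is what yields the performance and endurance gain: less data is written to flash per command.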
20160117253 | Method for Improving Mixed Random Performance in Low Queue Depth Workloads - Systems, methods and/or devices are used to enable improving mixed random performance in low queue depth workloads in a storage device (e.g., comprising a plurality of non-volatile memory units, such as one or more flash memory devices). In one aspect, the method includes (1) maintaining a write cache corresponding to write commands from a host, (2) determining a workload in accordance with commands from the host, (3) in accordance with a determination that the workload is a non-qualifying workload, scheduling a regular flush of the write cache, and (4) in accordance with a determination that the workload is a qualifying workload, scheduling an optimized flush of the write cache. | 2016-04-28 |
20160117254 | CACHE OPTIMIZATION TECHNIQUE FOR LARGE WORKING DATA SETS - A system and method for recognizing data access patterns in large data sets and for preloading a cache based on the recognized patterns is provided. In some embodiments, the method includes receiving a data transaction directed to an address space and recording the data transaction in a first set of counters and in a second set of counters. The first set of counters divides the address space into address ranges of a first size, whereas the second set of counters divides the address space into address ranges of a second size that is different from the first size. One of a storage device or a cache thereof is selected to service the data transaction based on the first set of counters, and data is preloaded into the cache based on the second set of counters. | 2016-04-28 |
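The dual-counter bookkeeping above can be sketched as two counter arrays that bucket the same address space at two granularities: the coarse counters could drive the storage-vs-cache routing decision and the fine counters the preload choice. The granularities and address-space size here are assumptions.

```python
ADDR_SPACE = 1 << 30
COARSE = 1 << 24   # first counter set: 16 MiB ranges
FINE = 1 << 16     # second counter set: 64 KiB ranges

coarse_counts = [0] * (ADDR_SPACE // COARSE)
fine_counts = [0] * (ADDR_SPACE // FINE)

def record(addr: int):
    """Record one data transaction in both counter sets."""
    coarse_counts[addr // COARSE] += 1
    fine_counts[addr // FINE] += 1

def hottest_fine_range() -> int:
    """Fine-grained range to preload next (most accesses; ties -> lowest)."""
    return max(range(len(fine_counts)), key=fine_counts.__getitem__)

for addr in (0x1234, 0x5678, 0x1000000, 0x1000040):
    record(addr)
```

Because the two sets bucket differently, a coarse range can look hot enough to justify caching while the fine counters pinpoint which 64 KiB slices within it are actually worth preloading.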
20160117255 | DEVICE HAVING A CACHE MEMORY - A device has a cache memory for temporarily storing contents of a buffer memory. The device has a mirror unit coupled between the cache memory and the buffer memory. The mirror unit is arranged for providing at least two buffer mirrors at respective different buffer mirror address ranges in the main address range by adapting the memory addressing. Due to the virtual mirrors data on a respective address in any of the respective different buffer mirror address ranges is the data of the buffer memory at a corresponding address in the buffer address range. The device enables processing of a subsequent set of data in the buffer memory via the cache memory without invalidating the cache by switching to a different buffer mirror. | 2016-04-28 |
20160117256 | NONVOLATILE MEMORY DEVICES AND METHODS OF CONTROLLING THE SAME - At least one example embodiment discloses a method of controlling a nonvolatile memory device including a plurality of blocks, each block including a plurality of physical pages. The method includes receiving a first plurality of logical pages associated with a first plurality of logical addresses, respectively, and writing the first plurality of logical pages to a plurality of physical addresses according to an ascending order of the logical addresses of the first plurality of logical pages. | 2016-04-28 |
20160117257 | HARDWARE-BASED ARRAY COMPRESSION - Technologies are generally described herein for compressing an array using hardware-based compression and performing various instructions on the compressed array. Some example technologies may receive an instruction adapted to access an address in an array. The technologies may determine whether the address is compressible. If the address is compressible, then the technologies may determine a compressed address of a compressed array based on the address. The compressed array may represent a compressed layout of the array where a reduced size of each compressed element in the compressed array is smaller than an original size of each element in the array. The technologies may access the compressed array at the compressed address in accordance with the instruction. | 2016-04-28 |
20160117258 | SEAMLESS APPLICATION ACCESS TO HYBRID MAIN MEMORY - A command from an application is received to access a data structure associated with one or more virtual addresses mapped to main memory. A first subset of the virtual addresses for the data structure having constituent addresses that are mapped to the symmetric memory components and a second subset of the virtual addresses for the data structure having constituent addresses that are mapped to the asymmetric memory components are identified. Data associated with the virtual addresses from the first physical addresses and data associated with the virtual addresses from the second physical addresses are accessed. The data associated with the symmetric and asymmetric memory components is accessed by the application without providing the application with an indication of whether the data is accessed within the symmetric memory component or the asymmetric memory component. | 2016-04-28 |
20160117259 | STORAGE MANAGEMENT METHOD, STORAGE MANAGEMENT SYSTEM, COMPUTER SYSTEM, AND PROGRAM - A storage management method and the like for managing a hierarchical storage are provided. A storage management method is provided for managing a hierarchical storage including a lower storage tier, and a higher storage tier having higher speed than the lower storage tier, on a computer system including at least one computer. This storage management method includes a step of causing the computer system to copy a target data item from the higher storage tier to the lower storage tier, and a step of causing the system to determine whether or not to delete the entity of the data item on the higher storage tier having been subjected to the copying based on a time required for reading the copy of the data item. | 2016-04-28 |
20160117260 | Method and Computing Device for Encrypting Data Stored in Swap Memory - The following embodiments generally relate to the use of a “swap area” in a non-volatile memory as an extension to volatile memory in a computing device. These embodiments include techniques to use both volatile memory and non-volatile swap memory to pre-load a plurality of applications, to control the bandwidth of swap operations, to encrypt data stored in the swap area, and to perform a fast clean-up of the swap area. | 2016-04-28 |
20160117261 | RESPONSE VALIDATION MECHANISM FOR TRIGGERING NON-INVASIVE RE-TEST ACCESS OF INTEGRATED CIRCUITS - In an embodiment of the invention, response validation offers increased integrated circuit security by using a unique password or re-test key for every integrated circuit manufactured. Non-invasive re-test of an IC can be performed using an encryption input. | 2016-04-28 |
20160117262 | Hybrid Cryptographic Key Derivation - Cryptographic key management and usage is accomplished by employing a hybrid symmetric/asymmetric security context wherein seed values are associated with randomly generated cryptographic keys. A security context environment is maintained wherein cryptographic keys are reliably reproduced when needed. | 2016-04-28 |
20160117263 | STORAGE DEVICE AND CONTROL METHOD FOR STORAGE DEVICE - Key information that is currently in use is archived in a management server to prevent the key information from being lost. A storage device | 2016-04-28 |
20160117264 | SYSTEMS AND METHODS FOR PREVENTING DATA REMANENCE IN MEMORY - A system for preventing data remanence in memory is provided. The system includes a computing device, a memory chip coupled to the computing device and including memory, and a heater, the heater configured to prevent data remanence in a memory by providing heat to at least a portion of the memory. The memory includes a plurality of bits configured to electronically store data. | 2016-04-28 |
20160117265 | MAINTAINING A SECURE PROCESSING ENVIRONMENT ACROSS POWER CYCLES - Embodiments of an invention for maintaining a secure processing environment across power cycles are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive an instruction to evict a root version array page entry from a secure cache. The execution unit is to execute the instruction. Execution of the instruction includes generating a blob to contain information to maintain a secure processing environment across a power cycle and storing the blob in a non-volatile memory. | 2016-04-28 |
20160117266 | SELECTIVE MANAGEMENT OF SECURITY DATA - Security techniques may be selectively performed on data based on a classification of the data. One example technique includes receiving a memory access command specifying a target data block on a storage medium storing both security data and non-security data. The technique further includes determining whether data affected by the access command is security data. Responsive to such determination, one of multiple data management schemes is selected to implement the memory access command, where each of the data management schemes is adapted to implement the memory access command via a different series of processing operations to provide a different level of security protection for data affected by the memory access command. | 2016-04-28 |
20160117267 | CONCURRENT VIRTUAL TAPE USAGE - A request to access a virtual tape volume is identified and a lock status is maintained for the virtual tape volume. The lock status includes a shared status and an exclusive lock status. In the shared status, it is determined whether the request includes a request for write access to the virtual tape volume. Concurrent access to the virtual tape volume can be allowed by two or more applications during the shared status based at least in part on whether the applications request write access to the virtual tape volume. | 2016-04-28 |
20160117268 | METHOD AND SYSTEM OF CONNECTING AND SWITCHING GROUPED INPUT AND OUTPUT DEVICES BETWEEN COMPUTERS - A system, method, and computer readable medium for switching (via a hub connection device) peripheral devices (such as a display, keyboard, mouse, audio) between a primary computing device (such as an embedded computer or a network connected server) and a secondary portable personal computing device (such as a laptop or a smart-phone). The present invention relates generally to multi-user computing, docking stations, and embedded system-on-a-chip computing, and specifically to methods and systems for switching peripheral devices between multiple computers for independent and/or multi-user operation. This invention enables a single set of peripherals to be used for both independent and docking station operation, increasing productivity for users of portable computing devices (through expanded peripheral access) and decreasing deployment costs for organizations (by supporting multiple use-cases via a single set of peripherals). | 2016-04-28 |
20160117269 | SYSTEM AND METHOD FOR PROVIDING UNIVERSAL SERIAL BUS LINK POWER MANAGEMENT POLICIES IN A PROCESSOR ENVIRONMENT - One particular example implementation may include an apparatus that includes logic, at least a portion of which is in hardware, the logic configured to: determine that a first device maintains a link to a platform in a selective suspend state; assign a first latency value to the first device; identify at least one user detectable artifact when a second device exits the selective suspend state; and assign, to the second device, a second latency value that is different from the first latency value. | 2016-04-28 |
20160117270 | SHARING CONTENT USING A DONGLE DEVICE - A content sharing device may receive, from a content providing device, information that identifies content to be shared with a dongle device via a content sharing service. The content sharing device may receive, from the content providing device, information that identifies a contact with which the content is to be shared. The content sharing device may determine, based on the information that identifies the contact, a dongle device identifier. The dongle device identifier may include a network address associated with the dongle device. The content sharing device may provide, to the dongle device and based on determining the dongle device identifier, information that identifies the content. The information that identifies the content may cause the content to be accessible by a content receiving device connected to the dongle device. | 2016-04-28 |
20160117271 | SMART HOLDING REGISTERS TO ENABLE MULTIPLE REGISTER ACCESSES - A multiple access mechanism allows sources to simultaneously access different target registers at the same time without using a semaphore. The multiple access mechanism is implemented using N holding registers and source identifiers. The N holding registers are located in each slave engine. Each of the N holding registers is associated with a source and is configured to receive partial updates from the source before pushing the full update to a target register. After the source is finished updating the holding register and the holding register is ready to commit to the target register, a source identifier is added to a register bus. The source identifier identifies the holding register as the originator of the transaction on the register bus. The N holding registers are able to simultaneously handle N register transactions. The max value of N is 2 | 2016-04-28 |
20160117272 | PROGRAMMING INTERRUPTION MANAGEMENT - The present disclosure is related to programming interruption management. An apparatus can be configured to detect an interruption during a programming operation and modify the programming operation to program a portion of the memory array to an uncorrectable state in response to detecting the interruption. | 2016-04-28 |
20160117273 | MULTIPLE-INTERRUPT PROPAGATION SCHEME IN A NETWORK ASIC - Embodiments of the present invention are directed to a multiple-interrupt propagation scheme, which is an automated mechanism for the specification and creation of interrupts. Interrupts originating at leaf nodes of a network chip are categorized into different service levels according to their interrupt types and are propagated to a master of the network chip via a manager. For each interrupt, depending on its service level, the manager either instantaneously propagates the interrupt or delays propagation of the interrupt to the master. The master forwards the interrupts to different destinations. A destination can be a processing element that is located on the network chip or externally on a different chip. | 2016-04-28 |
20160117274 | USB PORT CONTROLLER WITH AUTOMATIC TRANSMIT RETRIES AND RECEIVE ACKNOWLEDGEMENTS - Described examples include USB controllers and methods of interfacing a host processor with one or more USB ports with the host processor implementing an upper protocol layer and a policy engine for negotiating USB power delivery parameters, in which the USB controller includes a logic circuit implementing a lower protocol layer to provide automatic outgoing data transmission retries independent of the upper protocol layer of the host processor. The controller logic circuit further implements automatic incoming data packet validity verification and acknowledgment independent of the upper protocol layer of the host processor. | 2016-04-28 |
20160117275 | APPARATUS AND METHODS FOR SERIAL INTERFACES - Apparatus and methods for serial interfaces are provided. In one embodiment, an integrated circuit operable to communicate over a serial interface is provided. The integrated circuit includes analog circuitry, registers for controlling the operation of the analog circuitry, and a distributed slave device including a primary block and a secondary block. The registers are accessible over the serial interface using a shared register address space. Additionally, the primary block is electrically connected to the serial interface and to a first portion of the registers and the secondary block is electrically connected to the primary block and to a second portion of the registers. | 2016-04-28 |
20160117276 | KVM SWITCH - A KVM (K: keyboard, V: video, M: mouse) switch to be connected between a computer, and a keyboard and a mouse, the KVM switch includes: a connector that is cascade-connected to another KVM switch; an inputter that inputs a creation instruction of control data including control information which controls the operation of the another KVM switch; a creator that creates the control data having a data format that is the same as a data format of key code data inputted from the keyboard or the mouse when a creation instruction of the control data is inputted, the control data including the control information and a first identifier indicative of the control data; and an outputter that outputs the created control data to the another KVM switch. | 2016-04-28 |
20160117277 | COLLABORATIVE HARDWARE INTERACTION BY MULTIPLE ENTITIES USING A SHARED QUEUE - A method for interaction by a central processing unit (CPU) and peripheral devices in a computer includes allocating, in a memory, a work queue for controlling a first peripheral device of the computer. The CPU prepares a work request for insertion in the allocated work queue, the work request specifying an operation for execution by the first peripheral device. A second peripheral device of the computer submits an instruction to the first peripheral device to execute the work request that was prepared by the CPU and thereby to perform the operation specified by the work request. | 2016-04-28 |
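The split of responsibilities above, where the CPU prepares a work request but a second device triggers its execution, can be sketched as follows. All names and the dictionary layout are invented for illustration; the actual queue format is hardware-specific.

```python
# Minimal sketch of the shared-queue collaboration pattern.
work_queue = []                         # work queue allocated in memory

def cpu_prepare(operation):
    """CPU writes a work request but does not trigger execution."""
    work_queue.append({"op": operation, "done": False})
    return len(work_queue) - 1          # index of the prepared request

def device_submit(index):
    """Second peripheral instructs the first device to run request `index`."""
    request = work_queue[index]
    request["done"] = True              # first device executes the op here
    return request["op"]

slot = cpu_prepare("dma_copy")
executed = device_submit(slot)
print(executed)
```

Decoupling preparation from submission lets the CPU batch up requests while another peripheral decides when they run.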
20160117278 | PERIPHERAL PROTOCOL NEGOTIATION - Systems and methods of operating a computing system may involve utilizing at least one of a peripheral protocol negotiation and a universal connector to determine a peripheral device protocol, and reconfiguring a computer device to accommodate that peripheral device protocol. Upon such a reconfiguration, the peripheral protocol negotiation may “step aside”, and one or more subsequent communications between a host computer and the peripheral device utilizing the peripheral device protocol may start. | 2016-04-28 |
20160117279 | Dynamic Connection of PCIe Devices and Functions to an Array of Hosts - Systems and methods for connecting a device to one of a plurality of processing hosts. A virtual interface card (VIC) adapter learns the number and location of the hosts and an identification of the device; receives a mapping of the device to a selected host, wherein the host is selected from the plurality of hosts; and dynamically builds an interface that connects the device to the selected host. | 2016-04-28 |
20160117280 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing apparatus that includes a plurality of ports, selectively connects one of the plurality of ports, and is interconnected to a plurality of apparatuses via the plurality of ports, the information processing apparatus includes a memory configured to store therein zoning information indicating a connection relationship between the plurality of ports; and a processor coupled to the memory and configured to issue, based on the zoning information, at least one of a first instruction for instructing to announce in a visually confirmable manner by using an indicator arranged so as to correspond to one or more of the plurality of ports and a second instruction for instructing to transmit an announcement command signal used for requesting an apparatus coupled to the information processing apparatus to announce a port of the apparatus in a visually confirmable manner. | 2016-04-28 |
20160117281 | INFORMATION SYSTEM CAPABLE OF EXPANDING DRIVE AND BUS NUMBER ALLOCATION METHOD OF THE INFORMATION SYSTEM - In a storage device that applies PCIe to its back-end network connection, allocating bus numbers so that a PCIe switch added later becomes usable normally requires resetting all PCIe switches. To remove this requirement, the PCIe switches of the storage device's back-end network are connected in series, and a range of contiguous bus numbers, managed and stored in a bus number management table, is allocated for the back-end network connection. When a PCIe switch is added, bus numbers are allocated in ascending order from the minimum allocatable bus number to each link between PCIe switches and to the virtual PCI bus within the PCIe switch, and in descending order from the maximum allocatable bus number to the links between a PCIe switch and a drive. | 2016-04-28 |
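The two-ended allocation scheme above can be illustrated with a small sketch. The link names and data structures here are invented: inter-switch links and internal virtual buses take numbers rising from the bottom of the allocatable range, while switch-to-drive links take numbers falling from the top, so a switch added later can claim the next ascending numbers without disturbing existing assignments.

```python
# Illustrative two-ended bus-number allocator (all names are invented).
def allocate(bus_range, switch_links, drive_links):
    low, high = bus_range
    ascending = {}
    for link in switch_links:            # inter-switch links + virtual buses
        ascending[link] = low
        low += 1
    descending = {}
    for link in drive_links:             # switch-to-drive links
        descending[link] = high
        high -= 1
    return ascending, descending

up, down = allocate((1, 255),
                    ["sw0-sw1", "sw1-virtual", "sw1-sw2"],
                    ["sw0-drive0", "sw1-drive1"])
print(up)
print(down)
```

Because the two sequences grow toward each other, adding a switch consumes the next numbers at each end rather than forcing a global renumbering.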
20160117282 | TWO MODES OF A CONFIGURATION INTERFACE OF A NETWORK ASIC - Embodiments of the present invention are directed to a configuration interface of a network ASIC. The configuration interface allows for two modes of traversal of nodes. The nodes form one or more chains. Each chain is in a ring or a list topology. A master receives external access transactions. Once received by the master, an external access transaction traverses the chains to reach a target node. A target node either is an access to a memory space or is a module. A chain can include at least one decoder. A decoder includes logic that determines which of its leaves to send an external access transaction to. In contrast, if a module is not the target node, then the module passes an external access transaction to the next node coupled thereto; otherwise, if the module is the target node, the transmission of the external access transaction stops at the module. | 2016-04-28 |
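The list-topology traversal described above, where a non-target module passes a transaction to the next node and the target module stops it, reduces to a short walk. This sketch uses an invented flat chain; a real configuration interface would also include decoder branching, which is omitted here.

```python
# Toy sketch of list-mode traversal: the transaction visits nodes in order
# and stops at the target module instead of propagating further.
def traverse(chain, target):
    """Return the modules the transaction visits before stopping."""
    visited = []
    for module in chain:
        visited.append(module)          # transaction arrives at this node
        if module == target:
            break                       # target reached: stop propagating
    return visited

route = traverse(["decoder", "modA", "modB", "modC"], "modB")
print(route)
```

Nodes after the target never see the transaction, matching the abstract's rule that transmission stops at the target module.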
20160117283 | REMOTE DIRECT MEMORY ACCESS (RDMA) OPTIMIZED HIGH AVAILABILITY FOR IN-MEMORY DATA STORAGE - A method for RDMA optimized high availability for in-memory storing of data includes receiving RDMA key-value store write requests in a network adapter of a primary computing server directed to writing data to an in-memory key-value store of the primary computing server and performing RDMA write operations of the data by the network adapter of the primary computing server responsive to the RDMA key-value store write requests. The method also includes replicating the RDMA key-value store write requests to a network adapter of a secondary computing server, by the network adapter of the primary computing server. Finally, the method includes providing address translation data for the in-memory key-value store of the primary computing server from the network adapter of the primary computing server to the network adapter of the secondary computing server. | 2016-04-28 |
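The replication flow above can be modeled in plain Python. This is a hedged sketch only: the `Adapter` class, its methods, and the key format are invented, and real RDMA hardware, address translation, and zero-copy semantics are elided. The point shown is simply that the primary's adapter both applies the write and forwards the same request to the secondary's adapter.

```python
# Toy model of primary/secondary key-value replication (names invented).
class Adapter:
    def __init__(self, secondary=None):
        self.store = {}                 # in-memory key-value store
        self.secondary = secondary      # replication target, if any

    def write(self, key, value):
        self.store[key] = value         # "RDMA write" into local memory
        if self.secondary is not None:
            self.secondary.write(key, value)   # replicate the request

backup = Adapter()
primary = Adapter(secondary=backup)
primary.write("user:42", "alice")
print(backup.store)
```

After the write, the secondary holds an identical copy, which is what makes failover to it safe.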
20160117284 | METHODS FOR USING TEMPORAL PROXIMITY OF SOCIAL CONNECTION CREATIONS TO PREDICT PROPERTIES OF A SOCIAL CONNECTION - Aspects of the subject technology relate to a social-networking system, including one or more computers communicatively coupled via a network. In certain aspects, the computers are configured to perform operations including, receiving one or more indications that a common user has initiated a connection with each of a plurality of contacts in a social network and identifying two or more contacts, from among the plurality of contacts, that share a temporal relationship with respect to the connections formed between the common user and the respective two or more contacts in the social network. In certain implementations, the operations can further include comparing information associated with the two or more contacts to determine a likelihood that a common feature is shared by the two or more contacts. Computer-implemented methods and computer-readable media are also provided. | 2016-04-28 |
20160117285 | FINDING A CUR DECOMPOSITION - One embodiment is a computer-implemented method for finding a CUR decomposition. The method includes constructing, by a computer processor, a matrix C based on a matrix A. A matrix R is constructed based on the matrix A and the matrix C. A matrix U is constructed based on the matrices A, C, and R. The matrices C, U, and R provide a CUR decomposition of the matrix A. The construction of the matrices C, U, and R provide at least one of an input-sparsity-time CUR and a deterministic CUR. | 2016-04-28 |
20160117286 | NATURAL LANGUAGE PROCESSING-ASSISTED EXTRACT, TRANSFORM, AND LOAD TECHNIQUES - Embodiments presented herein disclose techniques for transforming input documents having disparate formats into a normalized format (e.g., Atom, RSS, HTML, customized XML, etc.). According to one embodiment, a plurality of fields is identified in an input document that has a given format. Each field includes a descriptor and text content associated with the descriptor. For each field, semantic properties are evaluated for the descriptor and text content against a plurality of mapping rules to determine whether the field is consistent with one of a plurality of fields of a target format. Each mapping rule specifies characteristics associated with one of the fields in the target format. Once so determined, a mapping from the first field to the second field is defined. | 2016-04-28 |
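The mapping-rule evaluation above can be sketched with a small rule table. The rule set, field names, and document layout below are all invented for illustration; a real system would evaluate richer semantic properties than keyword membership.

```python
# Illustrative mapping rules: descriptor keywords that mark an input field
# as matching one field of the target format (all names invented).
RULES = {
    "title":  {"headline", "title", "subject"},
    "author": {"byline", "author", "writer"},
    "body":   {"content", "body", "text"},
}

def map_field(descriptor):
    """Return the target-format field the descriptor maps to, or None."""
    word = descriptor.strip().lower()
    for target, keywords in RULES.items():
        if word in keywords:
            return target
    return None

document = [("Headline", "Storm hits coast"), ("Byline", "J. Doe")]
normalized = {map_field(d): text for d, text in document}
print(normalized)
```

Fields whose descriptors match no rule map to `None`, which a real pipeline would flag rather than silently drop.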
20160117287 | Method and Apparatus for Rendering Websites on Physical Devices - In at least one embodiment, the system and method described herein provide a concrete and tangible technical solution to the problem of activating and testing multiple web pages on multiple devices to determine any anomalies or other errors in rendering the web pages due to, for example, the unpredictable nature of rendering web pages on multiple devices having, for example, a multitude of different configuration characteristics. Furthermore, the system and method, in at least one embodiment, provide an automated process that is capable of testing web pages and devices remotely and in parallel at a high bandwidth. A system and method utilize a device service to coordinate: activating a web browser on each of the set of devices selected, rendering the web page by each web browser of the set of devices selected, and capturing one or more images of the web page as rendered by the set of devices selected. | 2016-04-28 |
20160117288 | TEARABLE DISPLAYS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for presenting content on tearable displays. One of the methods includes presenting, by a device having a tearable display, first content on the tearable display; receiving, at the device, an indication of a tear in the tearable display, the tear in the tearable display defining a first portion and a second portion of the tearable display; in response to receiving the indication, determining, by the device, a size of the second portion of the tearable display; and presenting, by the device, second content within the second portion of the tearable display including sizing the second content according to the size of the second portion of the tearable display. | 2016-04-28 |
20160117289 | Modifying Native Document Comments In A Preview - A document preview system provides previews of a native document to client devices. The previews include comments associated with native objects in the native document. The document preview system identifies bounding areas in the preview associated with the native objects, which may be identified by the rendering markers applied to the native document prior to rendering. Using the bounding areas, the document preview system identifies comments for the native document and determines the location to display the comment using native objects associated with the comment. When a new comment is received from a user for a preview of a native document, the document preview system determines native objects for the new comment that match a user's selection for placing the new comment. The new comment is inserted with the native objects in the native document. | 2016-04-28 |
20160117290 | Terminal and Information Interaction Method - The present invention provides a terminal and an information interaction method. The terminal comprises an identification generation unit and a notification unit. The identification generation unit generates a request message identification when receiving a request sent out by any group member of a social networking service group, and controls a display unit of the terminal to display the request message identification at a preset position of a page of the social networking service group. The notification unit receives a response operation performed by another group member on the request message identification, and notifies the requesting group member of a response message. | 2016-04-28 |
20160117291 | CONVERSION OF A PRESENTATION TO DARWIN INFORMATION TYPING ARCHITECTURE (DITA) - One embodiment of the present invention discloses a method, computer program product, and system for converting a Microsoft® PowerPoint® file to Darwin Information Typing Architecture (DITA). A document converter receives a command from a client device to convert one or more PowerPoint slides to DITA, wherein the PowerPoint file has been formatted for conversion to DITA. Starting with the first PowerPoint slide, metadata tags, PowerPoint slide and notes text, and file names of grouped images are compiled into a string parsed with DITA markup. If the next slide does not begin a new topic, then that slide's metadata tags, PowerPoint slide and notes text, and grouped image file names are compiled into a string parsed with DITA markup and appended to the previous slide's string. If the next slide begins a new topic, then the string is exported to a DITA topic. This process is repeated throughout the PowerPoint presentation. | 2016-04-28 |
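The grouping loop above, where slide text accumulates into one string until a slide begins a new topic, can be sketched as follows. The slide model is deliberately simplified and invented (`new_topic` flag plus text); the real method also compiles metadata tags, notes text, image file names, and DITA markup, which are omitted here.

```python
# Toy sketch of topic grouping: slides with new_topic=True start a fresh
# topic; every other slide's text is appended to the current topic string.
def slides_to_topics(slides):
    topics, current = [], []
    for slide in slides:
        if slide["new_topic"] and current:
            topics.append(" ".join(current))   # export finished topic
            current = []
        current.append(slide["text"])          # compile slide into string
    if current:
        topics.append(" ".join(current))       # export the final topic
    return topics

deck = [
    {"new_topic": True,  "text": "Intro"},
    {"new_topic": False, "text": "Details"},
    {"new_topic": True,  "text": "Summary"},
]
print(slides_to_topics(deck))
```

The trailing export after the loop matters: without it, the last topic of the deck would never be emitted.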
20160117292 | VISUAL WEB PAGE ANALYSIS SYSTEM AND METHOD - A visual web page analysis system includes an image analyzing unit, a block analyzing unit, a vision identifying unit, and an output unit. The image analyzing unit loads information of a web page and segments content of the web page into a plurality of blocks based on a visual feature. The block analyzing unit classifies the blocks based on an attribute of each block. The vision identifying unit compares at least a relative feature of each block to determine a function of each block on the web page. The output unit collects the blocks and their functions into an information interface and outputs the information interface. | 2016-04-28 |