24th week of 2017 patent application highlights part 44
Patent application number | Title | Published |
20170168869 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, CONTROL DEVICE, AND CONTROL METHOD - A non-transitory computer-readable storage medium storing a control program that causes a computer to execute a process, the process including obtaining, for each of a plurality of job flows to which coincident input data is inputted, data excluded from the coincident input data of each of the plurality of job flows by a data extraction process, or information specifying the excluded data, each of the plurality of job flows defining a plurality of processes, including the data extraction process, to be executed, and determining whether the plurality of job flows whose output data are coincident with each other are to be aggregated, based on the excluded data or the information specifying the excluded data. | 2017-06-15 |
20170168870 | TASK STATUS TRACKING AND UPDATE SYSTEM - Aspects include a method, a system and a computer program product for providing status updates while collaboratively resolving an issue. The method includes identifying, using a processing device, one or more key phrases in an electronic text-based message. Based on the identified one or more key phrases, at least one status-based suggestion is provided to a user to change a status milestone associated with a problem resolution. The providing of the change of milestone includes: building a table to map a key phrase to one or more status identifiers; mapping the key phrase to one or more status identifiers to associate the key phrase with the at least one status-based suggestion; and displaying a corresponding status milestone based on the user selecting from the at least one status-based suggestion. | 2017-06-15 |
20170168871 | METHOD AND ELECTRONIC DEVICE FOR TRIGGERING BACKGROUND TASK - A method for triggering a background task, including: receiving a request for performing the background task on an intelligent terminal; acquiring a current performance parameter of the intelligent terminal; and triggering performance of the background task when it is determined, based on the current performance parameter of the intelligent terminal, that a condition for performing the background task is satisfied. | 2017-06-15 |
20170168872 | TASK SCHEDULING METHOD AND APPARATUS - Provided is a task scheduling method. The method may include: assigning a task to one of first processing units functionally connected to an electronic device; and migrating, at least partially on the basis of a performance control condition related to the task, the task to one of second processing units for processing. | 2017-06-15 |
20170168873 | METHOD, DEVICE, AND SYSTEM FOR DECIDING ON A DISTRIBUTION PATH OF A TASK - A method for deciding on a distribution path of a task includes the following steps: identifying one or more processing elements from the plurality of processing elements that are capable of processing the task, identifying one or more paths for communicating with the one or more identified processing elements, predicting a cycle length for one or more of the identified processing elements and the identified paths, selecting a preferred processing element from the identified processing elements, and selecting a preferred path from the identified paths. The method may be executed by a device or a system. | 2017-06-15 |
20170168874 | PARTIAL TASK ALLOCATION IN A DISPERSED STORAGE NETWORK - A processing system in a dispersed storage and task (DST) network operates by receiving data and a corresponding task; identifying candidate DST execution units for executing partial tasks of the corresponding task; receiving distributed computing capabilities of the candidate DST execution units; selecting a subset of DST execution units of the candidate DST execution units to favorably execute the partial tasks of the corresponding task; determining task partitioning of the corresponding task into the partial tasks based on one or more of the distributed computing capabilities of the subset of DST execution units; determining processing parameters of the data based on the task partitioning; partitioning the tasks based on the task partitioning to produce the partial tasks; processing the data in accordance with the processing parameters to produce slice groupings; and sending the slice groupings and the partial tasks to the subset of DST execution units. | 2017-06-15 |
20170168875 | HARDWARE ACCESS COUNTERS AND EVENT GENERATION FOR COORDINATING MULTITHREADED PROCESSING - A computer system includes a hardware synchronization component (HSC). Multiple concurrent threads of execution issue instructions to update the state of the HSC. Multiple threads may update the state in the same clock cycle, and a thread does not need to receive control of the HSC prior to updating its state. Instructions referencing the state received during the same clock cycle are aggregated and the state is updated according to the number of the instructions. The state is evaluated with respect to a threshold condition. If it is met, then the HSC outputs an event to a processor. The processor then identifies a thread impacted by the event and takes a predetermined action based on the event (e.g., blocking, branching, or unblocking of the thread). | 2017-06-15 |
20170168876 | OPTIMIZED STREAMING IN AN UN-ORDERED INTERCONNECT - A method, system, and device provide for the streaming of ordered requests from one or more Senders to one or more Receivers over an un-ordered interconnect while mitigating structural deadlock conditions. | 2017-06-15 |
20170168877 | TENANT ENGAGEMENT SIGNAL ACQUISITION AND EXPOSURE - Tenant engagement signals are exposed to third party systems through an application programming interface (API). The third parties acquire the signals through the API, surface them, and launch workflows based on the tenant engagement signals acquired, in order to assist the tenant in the on-boarding process. | 2017-06-15 |
20170168878 | ENHANCED NOTIFICATION OF EDITING EVENTS IN SHARED DOCUMENTS - Technology is disclosed herein that enhances collaboration notifications. In various implementations, a notification queue is maintained for internal notifications that are generated as editing events that occur in relation to a shared document. The notification queue is periodically queried to determine which of the notifications qualify at a given time to be communicated externally to a group of users. An individual notification is communicated when only a single internal notification qualifies. But when multiple internal notifications are present that qualify, then a group notification is sent. Thus, users are presented with fewer notifications than otherwise, improving the user experience and conserving communication and computing resources. | 2017-06-15 |
20170168879 | Event Handling in a Cloud Data Center - Disclosed is event processing in a computing center, which may include receiving events from users of the computing center to be processed. Each received event may be stored in an event queue that is associated with a customer of the user. Events in an event queue may then be processed by an event processor that is associated with that event queue. | 2017-06-15 |
20170168880 | SYSTEM HAVING IN-MEMORY BUFFER SERVICE, TEMPORARY EVENTS FILE STORAGE SYSTEM AND EVENTS FILE UPLOADER SERVICE - Computer-implemented methods and systems are provided for writing events to a data store. An application server generates events, the data store stores the events, and a temporary events file storage system (TEFSS) temporarily stores groups of events as events files. When events are unable to be written directly to the data store, an indirect events writer is invoked that includes event capture threads, each being configured to generate a particular events file and write it to the TEFSS. Each events file includes a plurality of events flushed from an in-memory buffer service. An events file uploader service reads the events file(s) from the TEFSS, and then writes the events from each of the events files to the data store. | 2017-06-15 |
20170168881 | PROCESS CHAIN DISCOVERY ACROSS COMMUNICATION CHANNELS - A context identifier associated with an initial application program executed during an online session of a user is used to search data sources for data records associated with the user and/or the online session. Data records that are relevant to the context identifier are collected and analyzed to log all communication protocols involved during the online session. A protocol-specific monitoring tool is selected to examine each of the collected data records based on the logged communication protocols associated with each examined data record. Each examined data record is compared to reference data to identify function errors or low component performance experienced during the online session. A process chain for the online session may be constructed by parsing a first examined data record to identify specific data record elements and mapping the first examined data record to one or more second examined data records based on the identified data record elements. | 2017-06-15 |
20170168882 | EVENT MANAGEMENT IN A DATA PROCESSING SYSTEM - An event management method of attributing a seasonal fault to maintenance activity is described. The method includes identifying a sequence of fault events as a seasonal fault, and calculating an initial seasonality metric indicating a degree of seasonality of the sequence of fault events. One or more maintenance windows are identified, and then a subset of the sequence of the fault events which correspond in time with the maintenance windows are identified. A compensated seasonality metric is calculated for the sequence of fault events minus at least some of the subset of fault events. Based on determining that the compensated seasonality metric indicates a reduction in seasonality compared with the initial seasonality metric, an indication that the sequence of fault events is associated with maintenance activities is generated. | 2017-06-15 |
20170168883 | SYSTEM AND METHOD FOR TESTING CONFIGURATION AND OPERATION OF I/O DEVICES - The present invention provides methods, systems and computer program products for detecting a configuration error or operating error corresponding to an input/output (I/O) device. The I/O device comprises a plurality of I/O points configured to establish a combined I/O channel between said I/O device and a field device, said combined I/O channel comprising a primary I/O channel and at least one secondary I/O channel. | 2017-06-15 |
20170168884 | GENERIC ALARM CORRELATION BY MEANS OF NORMALIZED ALARM CODES - In an approach for identifying an incident requiring action, a processor receives a plurality of notifications from a plurality of sensors, wherein each notification is related to a problem identified by a sensor of the plurality of sensors. A processor determines an event type corresponding to each of the plurality of notifications based on a type of sensor from which a respective notification originates. A processor creates a group of notifications based on a location from which each respective notification originated and a time period during which each respective notification originated. A processor calculates a weight for each notification of the group based on the corresponding event type, wherein the weight indicates a likelihood to cause other notifications. A processor issues an incident report that includes a maintenance ticket, wherein the maintenance ticket identifies a notification within the group of a higher weight than other notifications of the group. | 2017-06-15 |
20170168885 | System and Method for Testing Internet of Things Network - The present disclosure relates to system(s) and method(s) for generating test data for testing an Internet of Things (IOT) network. Initially, the system is configured for receiving sensor ontology data of at least one sensor to be simulated for testing an Internet of things (IOT) network. The sensor ontology data may include a range of operation of the sensor and a frequency of operation of the sensor. Further, the system is configured for accepting a set of test scenarios for testing the IOT network. Furthermore, the system is configured for generating master test data for testing the IOT network, wherein the master test data comprises a set of test packages corresponding to the set of test scenarios and the sensor ontology data. | 2017-06-15 |
20170168886 | Resource Leak Detection Method, Apparatus, and System - Embodiments of the present disclosure disclose a resource leak detection method, apparatus, and system that includes obtaining a target resource called when target code of a program runs, where the target code is partial code in program code, determining a first storage resource amount occupied by the target resource, determining whether the first storage resource amount occupied by the target resource satisfies a first preset condition, and if the first storage resource amount occupied by the target resource satisfies the first preset condition, determining a storage location of the target code as a resource leak location. In the embodiments of the present disclosure, the target code of the program can be tracked, and further, by means of detection, the storage location of the target code can be determined as the resource leak location. | 2017-06-15 |
20170168887 | Long-Running Storage Manageability Operation Management - Serving resources. A method includes receiving from a client, a request for one or more operations to be performed. The method further includes attempting to perform the one or more operations. The method further includes determining that the one or more operations are not complete at a present time. As a result, the method further includes sending a message to the client indicating that the client should attempt to obtain status information for the one or more operations at a predetermined later time. The method further includes receiving a request from the client for status information about the one or more operations. The method further includes repeating sending a message to the client and receiving a request from the client for status information. | 2017-06-15 |
20170168888 | RESOLVING CONFLICTS BETWEEN MULTIPLE SOFTWARE AND HARDWARE PROCESSES - Embodiments include methods, systems, and computer program products for prioritizing delivery of messages across multiple communication systems. Aspects include that a conflict resolution system is configured to identify a plurality of processes. The conflict resolution system is further configured to generate a plurality of conflict rules corresponding to the plurality of processes. Based on the at least one selected process, the conflict resolution system can identify a conflict corresponding to at least one selected process of the plurality of processes in a conflict medium. In the exemplary embodiment, the conflict resolution system applies at least one selected conflict rule of the plurality of conflict rules corresponding to the conflict, the at least one selected process, and the conflict medium. Based on the at least one selected conflict rule, the conflict resolution system modifies the at least one selected process. | 2017-06-15 |
20170168889 | SEMICONDUCTOR DEVICE, FUNCTIONAL SAFETY SYSTEM AND PROGRAM - A semiconductor device includes a bitwise operation unit and a storage control unit. The bitwise operation unit performs a bitwise operation on first n-bit (n is an integer) data that is storage object data and second data of an n-bit bit pattern, and generates third data having a bit pattern in which the number of "1s" and the number of "0s" are almost the same. The storage control unit stores the third data into a first storage destination of a storage unit, and stores fourth data, which is either the third data or data that is converted into the third data by performing a predetermined bitwise operation on it, into a second storage destination of the storage unit. | 2017-06-15 |
20170168890 | ELECTRONIC SYSTEM WITH MEMORY DATA PROTECTION MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: a host processor; a system memory, coupled to the host processor, that includes data persistence regions identified by the host processor; a non-volatile storage device, including a fast path write (FPW) reserved area, configured to store user data from the system memory in a non-volatile media; and a power monitor unit, coupled to the host processor, configured to detect a power loss by a primary power failure detector and to assert a power-loss detection control; wherein the host processor is configured to engage a RAM flush driver for moving the content of the data persistence regions to the fast path write (FPW) reserved area in the non-volatile media when the power-loss detection control is asserted. | 2017-06-15 |
20170168891 | OPERATION METHOD OF NONVOLATILE MEMORY SYSTEM - An operation method of a nonvolatile memory system is provided. The method includes selecting a source block of memory blocks, performing a cell-counting with respect to the selected source block based on a reference voltage, and performing a reclaim operation on the source block based on the cell-counting result. | 2017-06-15 |
20170168892 | CONTROLLER FOR SEMICONDUCTOR MEMORY DEVICE AND OPERATING METHOD THEREOF - A controller includes a command generation unit suitable for generating a first read command for at least one page selected from a plurality of pages, an error correction block suitable for performing a first error correction operation on one or more code words stored in said at least one selected page in response to the first read command, and a command mirroring unit suitable for generating a mirrored command by mirroring the first read command. | 2017-06-15 |
20170168893 | DATA READING METHOD, MEMORY CONTROL CIRCUIT UNIT AND MEMORY STORAGE APPARATUS - A data reading method for a rewritable non-volatile memory module is provided. The method includes performing an error correction decoding operation on a user data stream according to an error checking and correcting (ECC) code to generate a first decoded data stream; searching uncorrectable sub-data units from decoded sub-data units of the first decoded data stream; selecting a target sub-data unit from the uncorrectable sub-data units; adjusting the target sub-data unit in the first decoded data stream to generate an adjusted user data stream; and re-performing the error correction decoding operation on the adjusted user data stream to generate a second decoded data stream; if the second decoded data stream has no error bit, transmitting the second decoded data stream as a corrected data stream to a host system. | 2017-06-15 |
20170168894 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a first decoder suitable for performing a first ECC decoding operation; a second decoder suitable for performing a second ECC decoding operation; and a control unit suitable for controlling the first decoder to perform the first ECC decoding operation on data chunks read from a memory region respectively according to read voltage sets, and performing one of prioritization, reservation, and omission of the second ECC decoding operation for a current data chunk when the first ECC decoding operation on the current data chunk fails. | 2017-06-15 |
20170168895 | QUEUING OF DECODING TASKS ACCORDING TO PRIORITY IN NAND FLASH CONTROLLER - Apparatus, for performing decoding tasks in a NAND Flash memory controller, includes a first task queue for queuing decoding tasks of a first priority, a second task queue for queuing decoding tasks of a second priority higher than the first priority, and control circuitry that, on receipt of portions of data for a plurality of decoding tasks, releases, from the first and second task queues, respective decoding tasks to operate on respective portions of data, according to priorities of the decoding tasks. First and second decoders operate under first and second decoding schemes that differ in speed or complexity. Input switching circuitry controllably connects each data channel to the first or second decoder. Decoder-done control circuitry selects output of the first or second decoder upon receipt of a decoder-done signal from the first or second decoder. Completed decoding tasks are queued in first and second task-done queues according to priority. | 2017-06-15 |
20170168896 | RAID-6 FOR STORAGE SYSTEM EMPLOYING A HOT SPARE DRIVE - A disclosed method for implementing a RAID-6 virtual disk includes performing data storing operations in response to receiving write data. The data storing operations include, in at least one embodiment: storing a block of the write data in D data stripes distributed across D of N storage devices, where D and N are integers greater than 0 and N is greater than D. The storage devices may correspond to disk drives, but may correspond to other types of storage devices as well. | 2017-06-15 |
20170168897 | DISTRIBUTED CODING FOR MULTIPLE DIMENSIONAL PARITIES - A method for distributed coding in a storage array is presented. The method includes dividing data into multiple stripes for storage in a storage array including storage devices with a topology of a hypercube of dimension t≧3. The storage devices in the same hypercubes of dimension t−1 included in the hypercube of dimension t have even parity. Global parities are added to the hypercube such that the minimum distance of the code is enhanced. | 2017-06-15 |
20170168898 | STREAMING ENGINE WITH ERROR DETECTION, CORRECTION AND RESTART - This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence including plural nested loops is specified by a control register. The streaming engine includes an address generator producing addresses of data elements and a stream head register storing data elements next to be supplied as operands. The streaming engine fetches stream data ahead of use by the central processing unit core in a stream buffer. Parity bits are formed upon storage of data in the stream buffer and are stored with the corresponding data. Upon transfer to the stream head register, a second parity is calculated and compared with the stored parity. The streaming engine signals a parity fault if the parities do not match. The streaming engine preferably restarts fetching the data stream at the data element generating a parity fault. | 2017-06-15 |
20170168899 | EXTERNAL HEALING MODE FOR A DISPERSED STORAGE NETWORK MEMORY - A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. Based on a detected storage error, the computing device is configured to determine availability status of encoded data slices (EDSs) within a set of EDSs. When at least a threshold number of EDSs are available, the computing device is configured to initiate a rebuilding function to abate the detected storage error. When less than the threshold number of EDSs are available, the computing device is configured to initiate a slice repair function to at least one storage unit (SU) to abate the detected storage error. | 2017-06-15 |
20170168900 | USING DECLARATIVE CONFIGURATION DATA TO RESOLVE ERRORS IN CLOUD OPERATION - Aspects extend to methods, systems, and computer program products for using declarative configuration data to resolve errors in cloud operation. A tool (e.g., a maintenance module) and a design model can be used for bootstrapping a cloud stack that enables an external media based deployment model. The deployment model allows provisioning of an entire cloud stack as well as reset of or recovery from a failure of an existing cloud deployment instance. In one aspect, a bootstrap command for a cloud, a recovery command for the cloud, and a reset command for the cloud are consolidated within declarative configuration data. The tool (e.g., the maintenance module) can refer to the declarative configuration data to implement any of the bootstrap command, the recovery command, or the reset command. | 2017-06-15 |
20170168901 | SERVER BACKUP METHOD AND BACKUP SYSTEM USING THE METHOD - A server backup method and a backup system using the server backup method are provided. The server backup method includes continuously collecting a plurality of dirty pages during a running operation and determining a backup start time point according to a quantity of the collected dirty pages. The server backup method also includes suspending the running operation according to the backup start time point and executing a backup snapshot operation to generate a data backup snapshot corresponding to the dirty pages, and executing a backup transmission operation to transmit the data backup snapshot. | 2017-06-15 |
20170168902 | PROCESSOR STATE INTEGRITY PROTECTION USING HASH VERIFICATION - This disclosure is directed to processor state integrity protection using hash verification. A device may comprise processing circuitry and memory circuitry. The processing circuitry may be triggered to enter a secure mode. Prior to entering the secure mode, the processing circuitry may determine a processor state of the processing circuitry and a hash of the processor state, and store them in secured memory within the memory circuitry. Prior to exiting the secure mode, the processing circuitry may compute an updated hash of the stored processor state and compare it to the previously stored hash. If the updated hash and stored hash are determined to be the same, then the processing circuitry may restore the processor state and normal operation resumes. If the updated hash and stored hash are determined to be different, then the stored processor state may be compromised and the processing circuitry may perform at least one protective action. | 2017-06-15 |
20170168903 | LIVE SYNCHRONIZATION AND MANAGEMENT OF VIRTUAL MACHINES ACROSS COMPUTING AND VIRTUALIZATION PLATFORMS AND USING LIVE SYNCHRONIZATION TO SUPPORT DISASTER RECOVERY - An illustrative “Live Synchronization” feature in a data storage management system can reduce the downtime that arises in failover situations. The illustrative Live Sync embodiment uses backup data to create and maintain a ready (or “warm”) virtualized computing platform comprising one or more virtual machines (“VMs”) that are configured and ready to be activated and take over data processing from another data processing platform operating in the production environment. The “warm” computing platform awaits activation as a failover solution for the production system(s) and can be co-located at the production data center, or configured at a remote or disaster recovery site, which in some embodiments is configured “in the cloud.” Both local and remote illustrative embodiments are discussed herein. An “incremental forever” approach can be combined with deduplication and synthetic full backups to speed up data transfer and update the disaster recovery sites. | 2017-06-15 |
20170168904 | TAIL OF LOGS IN PERSISTENT MAIN MEMORY - A system that uses a persistent main memory to preserve events that await logging in a persistent store. Each event is written into the persistent main memory so as to be loggable in case of recovery. For instance, the event may be written into a log cache structure, along with other state which identifies that the event is in the particular log cache structure, the location of the event within the particular log cache structure, and the order of the event. To recover, the log in the persistent store is evaluated to identify the end of the stored log. The tail of the log is identified in the persistent main memory by identifying any log cache structures that are after the end of the stored log and which are validly recoverable. The log cache structure contents are then serialized one log cache at a time, earliest first. | 2017-06-15 |
20170168905 | PROVIDING FAULT TOLERANCE IN A VIRTUALIZED COMPUTING ENVIRONMENT THROUGH A SWAPPING APPROACH - An example method is described to provide fault tolerance in a virtualized computing environment with a first fault domain and a second fault domain. The method may comprise determining whether a first primary virtualized computing instance and a first secondary virtualized computing instance are both in the first fault domain. The method may comprise: in response to determination that the first primary virtualized computing instance and first secondary virtualized computing instance are both in the first fault domain, selecting a second secondary virtualized computing instance from the second fault domain; migrating the first secondary virtualized computing instance from a first host to a second host; and migrating the second secondary virtualized computing instance from the second host to the first host, thereby swapping the first secondary virtualized computing instance in the first fault domain with the second secondary virtualized computing instance in the second fault domain. | 2017-06-15 |
20170168906 | PROVIDING FAULT TOLERANCE IN A VIRTUALIZED COMPUTING ENVIRONMENT THROUGH A MIGRATION APPROACH BASED ON RESOURCE AVAILABILITY - An example method is described to provide fault tolerance in a virtualized computing environment with a first fault domain and a second fault domain. The method may comprise determining whether a primary virtualized computing instance and a secondary virtualized computing instance are both in the first fault domain. The secondary virtualized computing instance may be configured as a backup for the primary virtualized computing instance and supported by a first host. The method may further comprise: in response to determination that the primary virtualized computing instance and secondary virtualized computing instance are both in the first fault domain, selecting, from the second fault domain, a second host based on a resource availability of the second host; and migrating the secondary virtualized computing instance from the first host to the second host, thereby migrating the secondary virtualized computing instance from the first fault domain to the second fault domain. | 2017-06-15 |
20170168907 | Service Level Agreement-Based Resource Allocation for Failure Recovery - Allocating resources during failure recovery is provided. A set of one or more service level agreement tiers are identified corresponding to a client workload that was being processed by a failed computing environment. A highest level tier is selected in the set of one or more service level agreement tiers. Recovery resources are allocated in a failover computing environment to the highest level tier sufficient to meet a service level agreement associated with the highest level tier. The highest level tier is recovered in the set of one or more service level agreement tiers using the recovery resources in the failover computing environment. In response to recovering the highest level tier, tier resources of the highest level tier are reduced to a steady state level of processing in the failover computing environment. | 2017-06-15 |
20170168908 | STORING DATA IN MULTI-REGION STORAGE DEVICES - An apparatus comprises a storage controller coupled to at least one multi-region storage device. The at least one multi-region storage device comprises two or more storage regions, the two or more storage regions comprising a first storage region associated with a first set of failure characteristics and at least a second storage region associated with a second set of failure characteristics different than the first set of failure characteristics. The storage controller is configured to replicate in the second storage region at least a portion of data that is stored in the first storage region. | 2017-06-15 |
20170168909 | CONTROLLING DEVICE, MANAGING DEVICE, STORAGE SYSTEM, CONTROL METHOD, MANAGEMENT METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A controlling device: receives state information indicating the state of a first storage region and the state of a second storage region for mirroring the first storage region; detects an error of an input and output process executed on the first storage region; executes, in response to the error, a first process if the first storage region is in a read-write mode, the first process including determining the states of the first and second storage regions, and selecting, based on the determined states, either one of executing the input and output process on the second storage region or stopping the input and output process executed on the first and second storage regions; and executes, in response to the error, a second process if the first storage region is in a read-only mode, the second process including executing the input and output process on the second storage region. | 2017-06-15 |
20170168910 | MULTICHIP DEBUGGING METHOD AND MULTICHIP SYSTEM ADOPTING THE SAME - Provided are a multichip debugging method and a multichip system adopting the same. The multichip system includes: a first chip including a first debugging port and first identification (ID) information, a second chip including a second debugging port and second ID information, and a test access port (TAP) electrically connected to the first debugging port and the second debugging port and configured to connect to a test apparatus via the TAP. | 2017-06-15 |
20170168911 | COMPUTER-IMPLEMENTED METHOD, INFORMATION PROCESSING DEVICE, AND RECORDING MEDIUM - A computer-implemented method includes: acquiring learning data from a plurality of processing devices in which a setting item, a setting value that includes a setting error included in configuration information acquired when a fault in a system has occurred, and a fault type are associated with each other; determining whether each of fault types included in the learning data depends on a software configuration; extracting first software configuration information indicating a combination of setting files in which settings related to software are described, from the configuration information, based on a result of the determining; extracting second software configuration information indicating a combination of setting files in which settings related to software are described, from configuration information of a detection target; and determining whether to output an indication of a fault occurrence within the detection target by comparing the second software configuration information with the first software configuration information. | 2017-06-15 |
20170168912 | TESTING FRAMEWORK FOR CONTROL DEVICES - The present disclosure generally relates to the automated testing of a system that includes software or hardware components. In some embodiments, a testing framework generates a set of test cases for a system under test using a grammar. Each test case may perform an action, such as provide an input to the system under test, and result in an output from the system under test. The inputs and outputs are then compared to the expected results to determine whether the system under test is performing correctly. Specifically, the system under test may be analyzed to determine whether it is capable of properly processing control instructions and input signals and/or generating expected output control signals and additional control/feedback information. The data can then be interpreted in the grammar system and/or used as input to a fault isolation engine to determine anomalies in the system under test. | 2017-06-15 |
20170168913 | METHOD AND SYSTEM FOR TESTING OF APPLICATIONS IN ASSET MANAGEMENT SOFTWARE - A method for testing asset management software applications includes: interfacing a user computing device with a computing server, the server being configured to execute an asset management application program; displaying, on the user computing device, a user interface configured to enable a user to perform functions associated with the asset management application program; recording a plurality of user input actions using the interface; generating a program script configured to, upon execution by the computing server, automate performance of each of the recorded plurality of user input actions; executing at least one instance of the generated program script; and measuring one or more performance metrics associated with performance of the computing server during execution of the at least one instance of the generated program script. | 2017-06-15 |
20170168914 | RULE-BASED ADAPTIVE MONITORING OF APPLICATION PERFORMANCE - A method for dynamically and adaptively monitoring a system based on its running behavior adjusts monitoring levels of the monitored application in real-time. A rules-based mechanism dynamically adjusts monitoring levels in real-time, based on the system's performance observed during a workload run, whether in a production or test environment. | 2017-06-15 |
20170168915 | DYNAMIC TRACE LEVEL CONTROL - A method for adjusting a filtering mechanism within a trace logging system. The method may include receiving a plurality of messages from a software program, whereby each of the plurality of messages includes a message logging level. The method may also include storing the plurality of received messages in a buffer. The method may further include determining an error has occurred within the software program. The method may also include identifying each of the plurality of stored messages that aid in debugging the determined error. The method may further include updating an alert status configuration based on the message logging level associated with each of the plurality of identified messages. | 2017-06-15 |
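As a rough illustration of the idea (not the patented implementation — the class and level names here are hypothetical), a buffered trace logger might raise its verbosity after an error based on which buffered messages proved relevant:

```python
from collections import deque

LEVELS = {"DEBUG": 10, "INFO": 20, "WARN": 30, "ERROR": 40}

class TraceBuffer:
    """Buffers recent messages; after an error, updates the alert
    configuration so levels that aided debugging are logged eagerly."""

    def __init__(self, capacity=100, alert_level="WARN"):
        self.buffer = deque(maxlen=capacity)  # bounded message buffer
        self.alert_level = alert_level        # current filter threshold

    def log(self, level, message):
        self.buffer.append((level, message))

    def on_error(self, keyword):
        # Identify buffered messages that aid in debugging the error.
        relevant = [(lvl, msg) for lvl, msg in self.buffer if keyword in msg]
        if relevant:
            # Lower the threshold to the most verbose relevant level.
            lowest = min((lvl for lvl, _ in relevant), key=LEVELS.get)
            if LEVELS[lowest] < LEVELS[self.alert_level]:
                self.alert_level = lowest
        return relevant
```

The keyword match standing in for "messages that aid in debugging" is a placeholder; the claim leaves the identification mechanism open.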
20170168916 | OBJECT MONITORING IN CODE DEBUGGING - According to example embodiments of the present invention, an object to be monitored is determined, the object being associated with a variable in a code snippet including a plurality of statements. The object is monitored in execution of the plurality of statements. If a plurality of updates of the object are detected in the execution of the plurality of statements, a snapshot associated with each of the updates of the object is created. The snapshot includes a current value of the object and a memory address for the current value of the object. | 2017-06-15 |
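A minimal Python sketch of the snapshot mechanism (the class name is invented, and `id()` stands in for the memory address of the current value):

```python
class MonitoredObject:
    """Creates a snapshot (current value plus its memory address)
    on every update of the monitored object."""

    def __init__(self, value):
        self.snapshots = []
        self.set(value)

    def set(self, value):
        self._value = value
        # id() stands in for the memory address of the current value.
        self.snapshots.append({"value": value, "address": id(value)})

obj = MonitoredObject([1])   # statement 1 initializes the object
obj.set([1, 2])              # statement 2 updates it
obj.set([1, 2, 3])           # statement 3 updates it again
```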
20170168917 | DYNAMIC TRACE LEVEL CONTROL - A method for adjusting a filtering mechanism within a trace logging system. The method may include receiving a plurality of messages from a software program, whereby each of the plurality of messages includes a message logging level. The method may also include storing the plurality of received messages in a buffer. The method may further include determining an error has occurred within the software program. The method may also include identifying each of the plurality of stored messages that aid in debugging the determined error. The method may further include updating an alert status configuration based on the message logging level associated with each of the plurality of identified messages. | 2017-06-15 |
20170168918 | Sandboxing for Custom Logic - A method of extending the functionality of an enterprise software suite is disclosed. A request is received from a client system to modify a programming object on a productive system deployed in the cloud environment. A logical unit of programming objects is identified on the productive system, the logical unit including the programming object. Copies of each of the programming objects in the logical unit of programming objects are created in a sandbox of a combined development and test system deployed in the cloud environment, the copies including a copy of the programming object. The copy of the programming object is modified in the sandbox. A result of the modifying of the copy of the programming object in the sandbox is communicated for presentation in a client system without modifying the programming object on the productive system. | 2017-06-15 |
20170168919 | FEATURE SWITCHES FOR PRIVATE CLOUD AND ON-PREMISE APPLICATION COMPONENTS - A set of features is received. A feature from the set of features includes a feature setting. The feature setting is adjusted based on a user input. A source code portion that corresponds to the adjusted feature setting is transported to a test system. The source code portion is implemented at the test system and evaluated based on the adjusted feature setting. Log data from the test system is analyzed in a feature evaluation UI. The evaluated feature with the implemented source code portion is submitted for deployment from the test system to a production system. | 2017-06-15 |
20170168920 | TRANSFER OF PAYLOAD DATA - A transfer of payload data from a buffer to a destination data store is provided so that the data can be processed there by a computer-assisted development environment. To this end, a data management environment provides the buffer and the destination data store. A data record having the payload data and semantic data that are associated with the payload data is provided in the buffer, and a data object with processing-specific object semantics is provided in the destination data store. The data object is instantiated with the payload data by means of the semantic data in that the payload data are placed in the data object as a function of the object semantics of the data object in such a manner that the development environment can process the payload data on the basis of the object semantics. | 2017-06-15 |
20170168921 | RISK BASED PROFILES FOR DEVELOPMENT OPERATIONS - A method, computer program product, and system for risk monitoring of continuous software delivery include a first plurality of test data. The first plurality of test data is associated with one or more software components. In response to receiving a changelog, a change in the received plurality of test data is determined. A risk profile for the one or more software components is generated, in response to receiving the first plurality of test data and the received changelog. A component code graph is generated, based on the risk profile associated with the one or more software components and a risk value associated with the generated risk profile is calculated, based on the component code graph. | 2017-06-15 |
20170168922 | BUILDING COVERAGE METRICS AND TESTING STRATEGIES FOR MOBILE TESTING VIA VIEW ENUMERATION - A method and system for testing an application includes performing a static analysis of metadata of coding of an application, using a test application program executed by a processor on a computer. Available user interface states are simulated based on the static analysis. A configuration file of the application is accessed and parsed to enumerate states possible for the application. A coverage metric is calculated for the application based on a number of states reached by the simulating and a number of states possible. | 2017-06-15 |
20170168923 | SYSTEM AND METHOD FOR CREATING A TEST APPLICATION - A system and method for creating a test application are disclosed. Input data is received from a user at a development platform. The input data corresponds to a web application designed by the user, who provides customization options through the input data and may customize the design of the web application during development. An automated script fetches the input data from a server and converts it into a predefined format. Executable files are created, by a scripting language, from the input data in the predefined format according to the preferences of the user. The executable files are uploaded to the server and run to generate the test application, which the user may then download and install from the server. | 2017-06-15 |
20170168924 | METHOD AND SYSTEM FOR WEB-SITE TESTING - The current document is directed to methods and systems for testing web sites. In certain implementations of the methods and systems, a testing service collects customer page-access and conversion information on behalf of a web site. The testing service is straightforwardly accessed and configured, through a web-site-based user interface, and is virtually incorporated into the web site by simple HTML-file modifications. A more efficient web-site-testing system nonuniformly distributes web-site accesses among web-page variants in order to more quickly and computationally efficiently determine a most effective web-page variant among a set of tested web-page variants. In certain implementations, nonuniform distribution of web-site accesses among web-page variants is facilitated by a Bayesian-inference method. | 2017-06-15 |
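One common Bayesian-inference approach to nonuniform traffic allocation is Thompson sampling over Beta posteriors. The sketch below is a plausible reading of that idea, not the patent's exact method; variant names and counts are invented:

```python
import random

def choose_variant(stats, rng=random.Random(0)):
    """Thompson sampling: draw a conversion-rate sample from each
    variant's Beta posterior and serve the variant with the best draw.
    `stats` maps variant -> (conversions, accesses)."""
    best, best_draw = None, -1.0
    for variant, (conversions, accesses) in stats.items():
        # Beta(1 + successes, 1 + failures): posterior under a uniform prior.
        draw = rng.betavariate(conversions + 1, accesses - conversions + 1)
        if draw > best_draw:
            best, best_draw = variant, draw
    return best

# Accesses concentrate on the variant that converts more often,
# which is how the better web-page variant is found efficiently.
stats = {"A": (5, 100), "B": (30, 100)}
picks = [choose_variant(stats) for _ in range(20)]
```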
20170168925 | VIRTUAL STORAGE ADDRESS THRESHOLD FOR FREEMAINED FRAMES - Address-based thresholds for freemained frames are used to determine retention actions. Based, at least in part, on a comparison of a number of freemained frames for an address space against a threshold of freemained frames for the address space, freemained frames can be retained or rejected and/or the threshold can be adjusted. | 2017-06-15 |
20170168926 | VIRTUAL STORAGE ADDRESS THRESHOLD FOR FREEMAINED FRAMES - Address-based thresholds for freemained frames are used to determine retention actions. Based, at least in part, on a comparison of a number of freemained frames for an address space against a threshold of freemained frames for the address space, freemained frames can be retained or rejected and/or the threshold can be adjusted. | 2017-06-15 |
20170168927 | METHOD AND APPARATUS FOR LOADING A RESOURCE IN A WEB PAGE ON A DEVICE - The present disclosure discloses a method and an apparatus for loading a resource in a web page on a device, as well as a computer-readable storage medium. The method comprises: determining whether a current available memory level of the device is normal or low; and loading the resource in the web page according to the current available memory level; wherein, if the current available memory level is low, loading the resource in the web page according to the current available memory level further comprises: loading a specified resource tailored from the resource in the web page. According to the embodiments of the present disclosure, a large amount of memory may be saved and the loading speed may be improved. Therefore, the browser residing on the device may be prevented from crashing, and the user experience may be improved. | 2017-06-15 |
20170168928 | SYSTEM AND METHOD FOR EFFICIENT ADDRESS TRANSLATION OF FLASH MEMORY DEVICE - Disclosed are a system and a method for address translation for a flash memory device, and particularly, disclosed is a technology that is capable of efficiently performing address translation between a logical address provided to the outside of a flash memory and a physical address of an actual flash memory in managing the flash memory device. The system includes: a flash memory system writing a corresponding data page by allocating a physical address space when there is a request for writing a data page from storage clients, and performing address translation between a physical address and a logical address; and a logical address space formed between the flash memory system and the storage client to provide the logical address. | 2017-06-15 |
20170168929 | MEMORY SYSTEM AND METHOD FOR CONTROLLING NONVOLATILE MEMORY - According to one embodiment, a memory system includes a nonvolatile memory including a plurality of blocks and a controller. The controller manages a garbage collection count for each of blocks containing data written by a host, the garbage collection count indicating the number of times the data in said each of the blocks has been copied by a garbage collection operation of the nonvolatile memory. The controller selects, as garbage collection target blocks, first blocks associated with a same garbage collection count. The controller copies valid data in the first blocks to a copy destination free block. The controller sets, as a garbage collection count of the copy destination free block, a value obtained by adding one to a garbage collection count of the first blocks. | 2017-06-15 |
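The count bookkeeping described above can be sketched in a few lines of Python (the block layout and field names are invented for illustration):

```python
def garbage_collect(blocks, free_block):
    """Select source blocks sharing the same (lowest) GC count, copy
    their valid data into the free destination block, and set the
    destination's GC count to the sources' count plus one."""
    target_count = min(b["gc_count"] for b in blocks)
    sources = [b for b in blocks if b["gc_count"] == target_count]
    for b in sources:
        free_block["data"].extend(b["valid_data"])
        b["valid_data"] = []          # source blocks become free
    free_block["gc_count"] = target_count + 1
    return free_block

blocks = [
    {"gc_count": 2, "valid_data": ["x"]},
    {"gc_count": 2, "valid_data": ["y"]},
    {"gc_count": 5, "valid_data": ["z"]},
]
dest = garbage_collect(blocks, {"data": [], "gc_count": 0})
```

Grouping sources by equal GC count keeps hot and cold data separated, since the destination's count reflects how many times its contents have already been copied.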
20170168930 | METHOD FOR OPERATING STORAGE CONTROLLER AND METHOD FOR OPERATING STORAGE DEVICE INCLUDING THE SAME - A method of operating a storage controller, for controlling a garbage collection operation so that blocks included in a non-volatile memory satisfy reuse constraints, includes determining whether the number of free blocks among the blocks is smaller than a first reference value for triggering a garbage collection operation and performing the garbage collection operation on the blocks until the number of free blocks is equal to a second reference value larger than the first reference value according to a result of the determination. | 2017-06-15 |
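The two-threshold trigger is a classic hysteresis pattern; a minimal sketch, where `reclaim_one` is a hypothetical callback returning how many blocks one GC pass freed:

```python
def maybe_collect(free_blocks, low_mark, high_mark, reclaim_one):
    """Trigger GC when the free-block count drops below `low_mark`,
    then keep collecting until it reaches the larger `high_mark`;
    the gap prevents GC from re-triggering immediately."""
    if free_blocks >= low_mark:
        return free_blocks            # enough free blocks; no GC
    while free_blocks < high_mark:
        free_blocks += reclaim_one()  # one GC pass frees some blocks
    return free_blocks
```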
20170168931 | NONVOLATILE MEMORY MODULE, COMPUTING SYSTEM HAVING THE SAME, AND OPERATING METHOD THEREOF - A nonvolatile memory module includes at least one nonvolatile memory, at least one nonvolatile memory controller configured to control the nonvolatile memory, at least one dynamic random access memory (DRAM) used as a cache of the at least one nonvolatile memory, data buffers configured to store data exchanged between the at least one DRAM and an external device, and a memory module control device configured to control the nonvolatile memory controller, the at least one DRAM, and the data buffers. The at least one DRAM stores a tag corresponding to cache data and compares the stored tag with input tag information to determine whether to output the cache data. | 2017-06-15 |
20170168932 | Secure Garbage Collection on a Mobile Device - Methods and systems for performing garbage collection involving sensitive information on a mobile device are described herein. Secure information is received at a mobile device over a wireless network. The sensitive information is extracted from the secure information. A software program operating on the mobile device uses an object to access the sensitive information. Secure garbage collection is performed upon the object after the object becomes unreachable. | 2017-06-15 |
20170168933 | Resilient Distributed Garbage Collection - In a distributed processing system having multiple processing nodes including alive nodes and dead nodes, a method is provided for collecting an object from the alive nodes. The method includes maintaining a separate count value for each of remote nodes at which the object is remotely-referenced. The method further includes suppressing a collection of the object when the separate count value for any of the remote nodes is non-zero. The method also includes clearing the separate count value for a given one of the remote nodes when the given one of the remote nodes is dead. | 2017-06-15 |
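A toy version of the per-node counting rule (the data layout is invented; the point is that only alive nodes' counts can suppress collection):

```python
def can_collect(remote_counts, alive_nodes):
    """Clear reference counts held by dead nodes, then allow collection
    only if no alive remote node still references the object."""
    for node in list(remote_counts):
        if node not in alive_nodes:
            remote_counts[node] = 0   # dead node: clear its count
    return all(count == 0 for count in remote_counts.values())

counts = {"node1": 2, "node2": 0, "node3": 1}
```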
20170168934 | MEMORY CONTROLLER WITH INTERLEAVING AND ARBITRATION SCHEME - A memory controller that implements an interleaving and arbitration scheme includes an address decoder that selects a memory bank for an access request based on a set of address least significant bits included in the access request. A core requiring sequential access to memory is routed to consecutive memory banks of the memory for consecutive access requests. When multiple cores request access to the same memory bank, an arbiter determines an access sequence for the cores. The arbiter can modify the access sequence without significantly increasing the complexity of the memory controller. The address decoder determines whether the selected memory banks are available and also whether an access request is a wide access request, in which case it selects two consecutive memory banks. | 2017-06-15 |
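When the bank count is a power of two, selecting a bank from the address least significant bits reduces to a modulo. A software sketch of that decode (parameters invented), including the wide-access case that claims two consecutive banks:

```python
def select_bank(address, num_banks=8, wide=False):
    """Pick the bank from the address LSBs so sequential addresses
    land in consecutive banks; a wide access claims two adjacent banks."""
    bank = address % num_banks   # same as taking log2(num_banks) LSBs
    return (bank, (bank + 1) % num_banks) if wide else (bank,)

# A core streaming through sequential addresses hits consecutive banks.
banks = [select_bank(a)[0] for a in range(4)]
```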
20170168935 | Method and Device of Memory Space Management - A virtual memory is partitioned into virtual partitions, each partition being subdivided into virtual sub-partitions and each sub-partition corresponding to a combination of multiple sectors of identical or different sizes of a physical memory. When an allocation request is made for a virtual memory space having a given memory size, a free partition is selected, a virtual sub-partition is selected corresponding to a combination of sectors having a minimum total size covering the given memory size of the virtual memory to be allocated, and free sectors of the physical memory are selected corresponding to the selected combination. A determination is made of a correspondence table between the selected virtual partition and the initial physical addresses of the selected free sectors, and a virtual address is generated. | 2017-06-15 |
20170168936 | SERVER-BASED PERSISTENCE MANAGEMENT IN USER SPACE - A persistence management system performs, at a server, operations associated with a number of applications. At the server, a persistence manager can intercept a file system call from one of the applications, wherein the file system call specifies a file located on a remote persistent storage device separate from the server. The persistence manager can determine that data belonging to the file requested by the file system call is stored on a local persistent storage device at the server, retrieve the data from the local persistent storage, and respond to the file system call from the application with the data. | 2017-06-15 |
20170168937 | COMMITTING TRANSACTION WITHOUT FIRST FLUSHING PROCESSOR CACHE TO NON-VOLATILE MEMORY WHEN CONNECTED TO UPS - A computing system includes a processor that has a processor cache built-in, and a non-volatile memory, such as a non-volatile dual-inline memory module (NVDIMM), which is being used as system memory within the computing system. The processor processes a transaction. If the computing system is connected to an uninterruptible power supply (UPS) (and the UPS is connected to a mains power source that is currently providing power), the transaction is committed without first flushing the processor cache to the non-volatile memory. If the computing system is not connected to a UPS (and is connected to a mains power source that is currently providing power), the transaction is not committed until the processor cache has been flushed to the non-volatile memory. | 2017-06-15 |
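The commit decision reduces to a single conditional; a sketch with invented callbacks:

```python
def commit(transaction, flush_cache, on_ups):
    """Skip the expensive processor-cache flush when a UPS guarantees
    that cached data will survive a mains power failure."""
    if not on_ups:
        flush_cache()   # data must reach the NVDIMM before committing
    transaction["committed"] = True
    return transaction
```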
20170168938 | ITERATOR REGISTER FOR STRUCTURED MEMORY - Loading data from a computer memory system is disclosed. A memory system is provided, wherein some or all data stored in the memory system is organized as one or more pointer-linked data structures. One or more iterator registers are provided. A first pointer chain is loaded, having two or more pointers leading to a first element of a selected pointer-linked data structure to a selected iterator register. A second pointer chain is loaded, having two or more pointers leading to a second element of the selected pointer-linked data structure to the selected iterator register. The loading of the second pointer chain reuses portions of the first pointer chain that are common with the second pointer chain. | 2017-06-15 |
20170168939 | SNOOP FILTER FOR CACHE COHERENCY IN A DATA PROCESSING SYSTEM - A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores the block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simple snoop control logic. The data processing system may be defined by instructions of a Hardware Description Language. | 2017-06-15 |
20170168940 | METHODS OF OVERRIDING A RESOURCE RETRY - In an embodiment, an apparatus includes control circuitry and a memory configured to store a plurality of access instructions. The control circuitry is configured to determine an availability of a resource associated with a given access instruction of the plurality of access instructions. The associated resource is included in a plurality of resources. The control circuitry is also configured to determine a priority level of the given access instruction in response to a determination that the associated resource is unavailable. The control circuit is further configured to add the given access instruction to a subset of the plurality of access instructions in response to a determination that the priority level is greater than a respective priority level of each access instruction in the subset. The control circuit is also configured to remove the given access instruction from the subset in response to a determination that the associated resource is available. | 2017-06-15 |
20170168941 | POWER SAVING FOR REVERSE DIRECTORY - Embodiments include systems and methods for improving power consumption characteristics of reverse directories in microprocessors. Some embodiments operate in context of multiprocessor semiconductors having cache hierarchies in which multiple higher-level caches share lower-level caches. Lower-level cache is coupled with reverse directories associated with respective ones of the higher-level caches. Each reverse directory can be segregated into two reverse sub-directories, one reverse sub-directory for relatively high-frequency accesses (e.g., updating “valid” and/or “private” information), and the other reverse sub-directory for relatively low-frequency accesses (e.g., updating “index” and “way” information). During a write mode operation, when the reverse directories are updated, the write operation is performed only on the sub-directories having the entries invoked by the update, such that write operations can frequently consume only a fraction (e.g., half) of the power of a conventional reverse directory write operation. | 2017-06-15 |
20170168942 | TECHNOLOGIES FOR MANAGING CACHE MEMORY IN A DISTRIBUTED SHARED MEMORY COMPUTE SYSTEM - Technologies for managing cache memory of a processor in a distributed shared memory system include managing a distance value and an age value associated with each cache line of the cache memory. The distance value is indicative of a distance of a memory resource, relative to the processor, from which data stored in the corresponding cache line originates. The age value is based on the distance value and the number of times for which the corresponding cache line has been considered for eviction since a previous eviction of the corresponding cache line. Initially, the age value is set to the distance value. Additionally, every time a cache line is accessed, the age value associated with the accessed cache line is reset to the corresponding distance value. During a cache eviction operation, the cache line for eviction is selected based on the age value associated with each cache line. The age values of cache lines not selected for eviction are subsequently decremented such that even cache lines associated with remote memory resources will eventually be considered for eviction if not recently accessed. | 2017-06-15 |
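One plausible reading of this policy in Python (field names invented): evict the line with the smallest age, refresh ages on access, and decrement survivors so even distant lines age out if left untouched:

```python
def access(line):
    line["age"] = line["distance"]    # reset age to distance on access

def evict(lines):
    """Evict the line whose age is smallest, then decrement the rest so
    remote (high-distance) lines eventually become candidates too."""
    victim = min(lines, key=lambda l: l["age"])
    lines.remove(victim)
    for line in lines:
        line["age"] = max(0, line["age"] - 1)
    return victim

lines = [
    {"tag": "local",  "distance": 1, "age": 1},
    {"tag": "remote", "distance": 4, "age": 4},
]
victim = evict(lines)   # the nearby line loses; remote data is protected
```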
20170168943 | COMPONENT CARRIER WITH CONVERTER BOARD - A component carrier with a housing and a converter board disposed within the housing. The converter board includes a U.2 connector, an M.2 connector configured to receive an M.2 solid state drive having a cache memory, and a capacitor. The capacitor provides backup power for a power loss protection system, allowing the cache to be flushed to storage. The housing is configured to receive one or more M.2 solid state drives coupled with the converter board. | 2017-06-15 |
20170168944 | BLOCK CACHE EVICTION - Several embodiments include a method of operating a cache appliance comprising a primary memory implementing an item-wise cache and a secondary memory implementing a block cache. The cache appliance can track at least a block-specific access statistic associated with a target block in the block cache. The block-specific access statistic can be stored in the primary memory. The cache appliance can detect an eviction condition that triggers the caching system to evict at least one block from the block cache, and select an eviction candidate block to evict by comparing the block-specific access statistic of the target block against one or more block-specific access statistics of one or more other blocks. | 2017-06-15 |
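The split — statistics kept in the fast primary memory, block payloads in the larger secondary memory — might be sketched as follows (class and field names invented):

```python
from collections import Counter

class BlockCache:
    """Keeps block-specific access statistics in (fast) primary memory
    while block payloads live in the (large) secondary block cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                 # secondary memory: block cache
        self.access_counts = Counter()   # primary memory: per-block stats

    def get(self, block_id):
        self.access_counts[block_id] += 1
        return self.blocks.get(block_id)

    def put(self, block_id, data):
        if len(self.blocks) >= self.capacity:   # eviction condition
            # Compare per-block statistics to pick the eviction candidate.
            victim = min(self.blocks, key=self.access_counts.__getitem__)
            del self.blocks[victim]
        self.blocks[block_id] = data
```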
20170168945 | HANDLING UNALIGNED LOAD OPERATIONS IN A MULTI-SLICE COMPUTER PROCESSOR - Handling unaligned load operations, including: receiving a request to load data stored within a range of addresses; determining that the range of addresses includes addresses associated with a plurality of caches, wherein each of the plurality of caches is associated with a distinct processor slice; issuing, to each distinct processor slice, a request to load data stored within a cache associated with the distinct processor slice, wherein the request to load data stored within the cache associated with the distinct processor slice includes a portion of the range of addresses; executing, by each distinct processor slice, the request to load data stored within the cache associated with the distinct processor slice; and receiving, over a plurality of data communications busses, execution results from each distinct processor slice, wherein each data communications bus is associated with one of the distinct processor slices. | 2017-06-15 |
20170168946 | STRIDE REFERENCE PREFETCHER - A processor including a cache memory, processing logic, access logic, stride mask logic, count logic, arbitration logic, and a prefetcher. The processing logic submits load requests to access cache lines of a memory page. The access logic updates an access vector for the memory page, in which the access logic determines a minimum stride value between successive load requests. The stride mask logic provides a mask vector based on the minimum stride value. The count logic combines the mask vector with the access vector to provide an access count. The arbitration logic triggers a prefetch operation when the access count achieves a predetermined count threshold. The prefetcher performs the prefetch operation using a prefetch address determined by combining the minimum stride value with an address of a last one of the load requests. Direction of the stride may be determined, and a stable mode is described. | 2017-06-15 |
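A simplified software model of the stride detection (the real design is hardware with access vectors and masks; the threshold and structure here are invented):

```python
def prefetch_address(load_addrs, count_threshold=3):
    """Find the minimum stride between successive loads to a page; if
    enough strides match it, return last address + stride to prefetch."""
    if len(load_addrs) < 2:
        return None
    strides = [b - a for a, b in zip(load_addrs, load_addrs[1:])]
    stride = min(strides, key=abs)    # minimum stride; sign keeps direction
    matches = sum(1 for s in strides if s == stride)
    if matches >= count_threshold:
        return load_addrs[-1] + stride
    return None
```

A negative minimum stride naturally handles descending access patterns, mirroring the abstract's note that the stride direction may be determined.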
20170168947 | METHOD OF PREDICTING A DATUM TO BE PRELOADED INTO A CACHE MEMORY - A method of predicting a datum to be preloaded includes the acquisition of a so-called “model” statistical distribution of the deltas of a model access sequence; the construction of a so-called “observed” statistical distribution of the deltas of an observed access sequence; the identification in the observed statistical distribution, by comparing it with the model statistical distribution, of the most deficient class, that is to say the class for which the difference NoDSM−NoDSO is maximal, where NoDSM and NoDSO are the numbers of occurrences of this class deduced, respectively, from the model statistical distribution and from the observed statistical distribution; and the provision, as prediction of the datum to be preloaded into the cache memory, of at least one predicted address where the datum to be preloaded is contained, this predicted address being constructed on the basis of the most deficient class identified. | 2017-06-15 |
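The most-deficient-class selection maps directly onto histogram arithmetic; a sketch (function and argument names invented) where the histograms hold NoDSM and NoDSO per delta class:

```python
from collections import Counter

def predict_address(model_deltas, observed_deltas, last_address):
    """Identify the most deficient delta class, i.e. the class maximizing
    NoDSM - NoDSO between the model and observed histograms, and predict
    last_address + that delta as the address to preload."""
    model = Counter(model_deltas)         # NoDSM per class
    observed = Counter(observed_deltas)   # NoDSO per class
    deficient = max(model, key=lambda d: model[d] - observed[d])
    return last_address + deficient

# Delta 8 occurred three times in the model but only once so far in the
# observed sequence, so it is the most "overdue" class.
addr = predict_address([8, 8, 8, 16], [8, 16], last_address=100)
```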
20170168948 | SIZING CACHE DATA STRUCTURES USING FRACTAL ORGANIZATION OF AN ORDERED SEQUENCE - A cache is sized using an ordered data structure having data elements that represent different target locations of input-output operations (IOs), and are sorted according to an access recency parameter. The cache sizing method includes continually updating the ordered data structure to arrange the data elements in the order of the access recency parameter as new IOs are issued, and setting a size of the cache based on the access recency parameters of the data elements in the ordered data structure. The ordered data structure includes a plurality of ranked ring buffers, each having a pointer that indicates a start position of the ring buffer. The updating of the ordered data structure in response to a new IO includes updating one position in at least one ring buffer and at least one pointer. | 2017-06-15 |
20170168949 | Migration of Data to Register File Cache - Methods and migration units for use in out-of-order processors for migrating data to register file caches associated with functional units of the processor to satisfy register read operations. The migration unit receives register read operations to be executed for a particular functional unit. The migration unit reviews entries in a register renaming table to determine if the particular functional unit has recently accessed the source register and thus is likely to comprise an entry for the source register in its register file cache. In particular, the register renaming table comprises entries for physical registers that indicate which functional units have accessed the physical register. If the particular functional unit has not accessed the particular physical register, the migration unit migrates data to the register file cache associated with the particular functional unit. | 2017-06-15 |
20170168950 | TECHNIQUES FOR STORING DATA AND TAGS IN DIFFERENT MEMORY ARRAYS - A memory controller includes logic circuitry to generate a first data address identifying a location in a first external memory array for storing first data, a first tag address identifying a location in a second external memory array for storing a first tag, a second data address identifying a location in the second external memory array for storing second data, and a second tag address identifying a location in the first external memory array for storing a second tag. The memory controller includes an interface that transfers the first data address and the first tag address for a first set of memory operations in the first and the second external memory arrays. The interface transfers the second data address and the second tag address for a second set of memory operations in the first and the second external memory arrays. | 2017-06-15 |
20170168951 | MEMORY SYSTEM AND METHOD FOR CONTROLLING NONVOLATILE MEMORY - According to one embodiment, a memory system includes a nonvolatile memory, and a controller electrically connected to the nonvolatile memory. The controller receives, from a host, a write command including a logical block address. The controller obtains a total amount of data written to the nonvolatile memory by the host during a time ranging from a last write to the logical block address to a current write to the logical block address, or time data associated with a time elapsing from the last write to the logical block address to the current write to the logical block address. The controller notifies the host of the total amount of data or the time data as a response to the received write command. | 2017-06-15 |
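The notification mechanism can be approximated in software as follows (a host-side sketch with assumed names; the actual controller operates inside the SSD on nonvolatile memory writes):

```python
import time

class WriteTracker:
    """Per logical block address, track the total bytes written to the
    device since that LBA was last written, and the elapsed time; both
    are returned as the response to the current write command."""
    def __init__(self):
        self.total_written = 0   # cumulative bytes written by the host
        self.last_write = {}     # lba -> (total_written at last write, timestamp)

    def write(self, lba, nbytes):
        now = time.monotonic()
        stats = None
        if lba in self.last_write:
            prev_total, prev_time = self.last_write[lba]
            # Data written between the last write to this LBA and now,
            # plus the elapsed time since that last write.
            stats = (self.total_written - prev_total, now - prev_time)
        self.total_written += nbytes
        self.last_write[lba] = (self.total_written, now)
        return stats
```

A host could use the returned pair to estimate how "hot" an LBA is and adapt its write placement accordingly.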
20170168952 | FILE ACCESS METHOD AND APPARATUS, AND STORAGE SYSTEM - A file access method and apparatus, and a storage system are provided. After receiving a file access request from a process, a first physical address space is accessed according to a preset first virtual address space and a preset first mapping relationship between the first virtual address space and the first physical address space, where the first physical address space stores a file system. After obtaining an index node of a target file from the first physical address space according to a file identifier of the target file carried in the file access request, a file page table of the target file is obtained according to file page table information. The file page table records a second physical address space in the first physical address space. The target file is accessed according to the second physical address space. | 2017-06-15 |
20170168953 | FILE ACCESS METHOD AND APPARATUS, AND STORAGE SYSTEM - A file access method and apparatus, and a storage system are provided. After receiving a file access request including a file identifier, first physical address space is accessed according to first virtual address space and a first mapping relationship between the first virtual address space and the first physical address space storing a file system. After obtaining, from the first physical address space, an index node of an object file indicated by the file identifier, a file page table is obtained according to information included in the index node, where the file page table records second physical address space of the object file. Then, second virtual address space is allocated to the object file. After establishing a second mapping relationship between the second physical address space and the second virtual address space, the object file in the second physical address space is accessed according to the second virtual address space. | 2017-06-15 |
20170168954 | SYSTEM ADDRESS MAP FOR HASHING WITHIN A CHIP AND BETWEEN CHIPS - A system and method for accessing on-chip and off-chip memory in an integrated circuit data processing system. The system includes a number of nodes connected by an interconnect and also includes system address map logic in which a node register table is accessed using a hash function of the memory address to be accessed. A node identifier stored in a register of the node register table is an identifier of a remote-connection node when the memory address is an off-chip memory address and an identifier of a local-connection node when the memory address is in on-chip memory. Transaction requests are routed using the node identifier selected using the hash function. | 2017-06-15 |
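A toy model of the hash-indexed node register table (the hash here is an arbitrary XOR-fold, purely illustrative; the patent does not disclose a specific hash function):

```python
class SystemAddressMap:
    """Route a memory transaction to a node identifier selected by a
    hash of the address, via a small node register table."""
    def __init__(self, table):
        self.table = table  # list of node identifiers (local or remote nodes)

    def hash_index(self, address):
        # Toy hash: XOR-fold the address down to an index into the table.
        return (address ^ (address >> 8) ^ (address >> 16)) % len(self.table)

    def route(self, address):
        return self.table[self.hash_index(address)]
```

Each table register would hold a local-connection node identifier for on-chip regions and a remote-connection node identifier for off-chip regions.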
20170168955 | EFFICIENT ADDRESS-TO-SYMBOL TRANSLATION OF STACK TRACES IN SOFTWARE PROGRAMS - The disclosed embodiments provide a system for processing data. During operation, the system obtains an attribute of a stack trace of a software program. Next, the system uses the attribute to select an address-translation instance from a set of address-translation instances for processing the stack trace. The system then provides the stack trace to the selected address-translation instance for use in translating a set of memory addresses in the stack trace into a set of symbols of instructions stored at the memory addresses. | 2017-06-15 |
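Selecting a translation instance from a stack-trace attribute can be as simple as a stable hash over the attribute, sketched here with assumed names (the abstract does not specify the selection function; a build identifier is used as the example attribute):

```python
import hashlib

def select_instance(stack_trace_attr, instances):
    """Pick an address-translation instance from a pool using a stable
    hash of a stack-trace attribute (e.g. the binary's build id), so
    traces from the same binary always reach the same instance."""
    digest = hashlib.sha256(stack_trace_attr.encode()).digest()
    return instances[digest[0] % len(instances)]
```

Routing by attribute keeps each instance's symbol tables warm for the binaries it repeatedly serves.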
20170168956 | BLOCK CACHE STAGING IN CONTENT DELIVERY NETWORK CACHING SYSTEM - Several embodiments include a method of operating a cache appliance comprising a primary memory and a secondary memory. The primary memory can implement an item-wise cache and the secondary memory can implement a block cache. The cache appliance can record an access history of a data item in the item-wise cache. The cache appliance can determine, by evaluating the access history of the data item, whether to store the data item in the block cache. | 2017-06-15 |
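A minimal sketch of the staging decision, assuming a simple reuse-count heuristic (the abstract does not state the actual evaluation criteria, so the threshold is hypothetical):

```python
class StagingCache:
    """Admit an item into the (secondary) block cache only after its
    access history in the (primary) item-wise cache shows reuse."""
    def __init__(self, min_hits=2):
        self.min_hits = min_hits
        self.history = {}      # item key -> access count in the item-wise cache
        self.block_cache = set()

    def access(self, key):
        self.history[key] = self.history.get(key, 0) + 1
        # Stage into the block cache once the item has proven reuse.
        if self.history[key] >= self.min_hits:
            self.block_cache.add(key)
        return key in self.block_cache
```

Staging one-hit wonders out of the block cache avoids wasting SSD write cycles on items that are never requested again.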
20170168957 | Aware Cache Replacement Policy - An aware cache replacement policy increases the length of in-page bursts of cache eviction memory requests and promotes bank-rotation to reduce the likelihood of memory bank-conflicts as compared to other cache replacement policies. The aware cache replacement policy increases the amount of valid data on the memory bus and reduces the impact of main memory precharge and activate times by evicting cache blocks in bursts based on temporal and spatial locality according to requesting thread and/or memory structure. | 2017-06-15 |
20170168958 | ITEM-WISE SIMULATION IN A BLOCK CACHE - Several embodiments include a method of operating a cache appliance comprising a primary memory implementing an item-wise cache and a secondary memory implementing a block cache. The cache appliance can emulate item-wise storage and eviction in the block cache by maintaining, in the primary memory, sampling data items from the block cache. The sampled items can enable the cache appliance to represent a spectrum of retention priorities. When storing a pending data item into the block cache, a comparison of the pending data item with the sampled items can enable the cache appliance to identify where to insert a block containing the pending data item. When evicting a block from the block cache, a comparison of a data item in the block with at least one of the sampled items can enable the cache appliance to determine whether to recycle/retain the data item. | 2017-06-15 |
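The sampled-item comparisons can be sketched as follows (hypothetical priority values; the abstract does not define how retention priority is computed, only that sampled items span a spectrum of priorities):

```python
import bisect

def insertion_rank(sampled_priorities, pending_priority):
    """Rank, among sampled items sorted ascending by retention priority,
    at which a block containing the pending item should be inserted."""
    return bisect.bisect_left(sampled_priorities, pending_priority)

def should_retain(item_priority, sampled_priorities):
    """On block eviction, recycle (retain) a data item if it outranks
    at least one sampled item still held in the cache."""
    return any(item_priority > s for s in sampled_priorities)
```

The sampled items thus act as calibration points that let a coarse block cache approximate item-wise ordering.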
20170168959 | MANAGING AND ORGANIZING WEB BROWSER CACHE - Disclosed are systems and methods for managing a browser cache. An example method comprises storing in a browser cache on a user device information of web pages visited by a user during one or more web browsing sessions; determining logical relationships among the web pages stored in the cache; associating the web pages with one or more clusters based on the determined logical relationships; upon detecting a usage size of the cache equal to or exceeding a threshold value, identifying information associated with the one or more clusters in the cache; determining a web page or a cluster of web pages to be deleted from the cache based on the identified information; and deleting from the cache one or more web pages based on the identified information associated with each of the one or more clusters. | 2017-06-15 |
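The cluster-wise deletion step might look like this (a sketch; cluster assignment and the ordering of clusters are assumed inputs, and deletion proceeds oldest cluster first as one plausible policy):

```python
from collections import defaultdict

def evict_by_cluster(pages, cache_limit):
    """pages: list of (url, cluster_id, size) in insertion order.
    When total usage exceeds the limit, delete whole clusters of
    logically related pages, oldest cluster first, until the cache fits."""
    total = sum(size for _, _, size in pages)
    kept = list(pages)
    if total <= cache_limit:
        return kept
    clusters = defaultdict(list)
    order = []  # cluster ids in first-seen order
    for url, cid, size in pages:
        if cid not in clusters:
            order.append(cid)
        clusters[cid].append((url, cid, size))
    for cid in order:
        if total <= cache_limit:
            break
        total -= sum(s for _, _, s in clusters[cid])
        kept = [p for p in kept if p[1] != cid]
    return kept
```

Evicting whole clusters rather than individual entries avoids leaving a page in the cache whose related resources have already been deleted.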
20170168962 | PROGRAMMABLE INTELLIGENT SEARCH MEMORY ENABLED SECURE FLASH MEMORY - Systems comprising a processor and a dynamic random access memory (DRAM). The DRAM comprises a programmable intelligent search memory (PRISM). | 2017-06-15 |
20170168963 | PROTECTION KEY MANAGEMENT AND PREFIXING IN VIRTUAL ADDRESS SPACE LEGACY EMULATION SYSTEM - A system is described to provide protection key access control in a system whose operating system and processor were not designed to provide a protection key memory access control mechanism. Such a system can be applied to an emulator or to enable a system that executes native applications to be interoperable with a legacy system that employs protection key memory access control. | 2017-06-15 |
20170168964 | HARD DRIVE DISK INDICATOR PROCESSING APPARATUS - A hard drive disk indicator processing apparatus includes first and second processors. The first processor includes first, second and third communication interfaces. The first communication interface receives at least one serial general purpose input/output signal from a motherboard. The second communication interface receives a plurality of pieces of hard drive disk status information reflecting the statuses of a plurality of hard drive disks. The third communication interface outputs serial information. The second processor includes fourth and fifth communication interfaces. The fourth communication interface is coupled to the third communication interface and receives the serial information. The fifth communication interface is coupled to a plurality of hard drive disk indicators. The first processor generates the serial information according to the at least one serial general purpose input/output signal. The second processor controls an on/off status of each of the hard drive disk indicators according to the serial information. | 2017-06-15 |
20170168965 | ELECTRONIC APPARATUS HAVING INTERFACE TO WIRELESSLY COMMUNICATE WITH INTERNAL WIRELESS DEVICE DISPOSED IN A HOUSING OF THE ELECTRONIC APPARATUS AND A METHOD THEREOF - An electronic apparatus includes an interface unit disposed in the housing to wirelessly communicate with a wireless device disposed inside the housing; an internal wireless device, being the wireless device inside the housing, having a sub-housing containing a circuit board, a semiconductor chip unit mounted on the circuit board, and an internal wireless interface unit mounted on the circuit board and electrically connected to the semiconductor chip unit; and a controlling/processing unit disposed in the housing configured to control the interface unit to wirelessly communicate with the internal wireless interface unit of the internal wireless device and to transmit or receive data between the interface unit and the internal wireless device when the internal wireless device exists in the housing. | 2017-06-15 |
20170168966 | OPTIMAL LATENCY PACKETIZER FINITE STATE MACHINE FOR MESSAGING AND INPUT/OUTPUT TRANSFER INTERFACES - Systems, methods, and apparatus for communicating virtualized general-purpose input/output (GPIO) signals over a serial communication link. A method performed at a transmitting device coupled to a communication link includes encoding virtual GPIO signals or messages into a data packet, determining a maximum latency requirement for transmitting the data packet over the communication link, providing a command code header indicating a packet type to be used for transmitting the data packet over the communication link, and transmitting the command code header and the data packet over the communication link in a packet selected to satisfy the maximum latency requirement. A protocol for transmitting the data packet may be determined based on the maximum latency requirement and one or more attributes of protocols available for use on the communication link. In one example, the communication link includes a serial bus and the available protocols include I2C, I3C, and/or RFFE protocols. | 2017-06-15 |
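Protocol selection under a latency bound could be sketched as below (illustrative numbers and an assumed selection rule; actual worst-case latencies depend on bus configuration and are not given in the abstract):

```python
def pick_protocol(protocols, max_latency_us):
    """protocols: mapping of protocol name -> worst-case transfer latency
    in microseconds. Return a protocol satisfying the maximum latency
    requirement; here the slowest one that still fits is chosen, on the
    assumption that slower protocols cost less bus overhead."""
    candidates = {name: lat for name, lat in protocols.items()
                  if lat <= max_latency_us}
    if not candidates:
        raise ValueError("no available protocol meets the latency bound")
    return max(candidates, key=candidates.get)
```

With worst-case latencies of 100 us (I2C), 30 us (RFFE) and 10 us (I3C), a 50 us bound would select RFFE under this rule.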
20170168967 | DIGITAL AGGREGATION OF INTERRUPTS FROM PERIPHERAL DEVICES - A host integrated circuit is provided with an interrupt aggregator having a signal terminal for coupling to the signal end of an R- | 2017-06-15 |
20170168968 | AUDIO BUS INTERRUPTS - Audio bus interrupts are disclosed. In one aspect, a new command (referred to herein as a Slave Interrupt Status command) is provided using a reserved Opcode within the SOUNDWIRE protocol. In response to a Ping Request by a slave, a master generates a PING command. The slave that generated the Ping Request sets a bit in a Ping Response according to the existing SOUNDWIRE protocol. However, instead of iteratively reading from each slave, the master uses the Slave Interrupt Status command to interrogate the requesting slave more thoroughly. In response to the Slave Interrupt Status command, the slave provides a more robust response that indicates the interrupt-requesting status of all registers within the slave that could generate an interrupt. Thus, the master is provided a complete list of which registers generated the original Ping Request and can act accordingly to address the issues that generated the interrupt. | 2017-06-15 |
20170168969 | RECONFIGURABLE TRANSMITTER - Described is a reconfigurable transmitter which includes: a first pad; a second pad; a first single-ended driver coupled to the first pad; a second single-ended driver coupled to the second pad; a differential driver coupled to the first and second pads; and a logic unit to enable the first and second single-ended drivers or to enable the differential driver. | 2017-06-15 |
20170168970 | POLICY-DRIVEN STORAGE IN A MICROSERVER COMPUTING ENVIRONMENT - An example method for facilitating policy-driven storage in a microserver computing environment is provided and includes receiving, at an input/output (I/O) adapter in a microserver chassis having a plurality of compute nodes and a shared storage resource, policy contexts prescribing storage access parameters of respective compute nodes, and enforcing the respective policy contexts on I/O operations by the compute nodes, such that a particular I/O operation by any compute node is not executed if the respective policy context does not allow it. The method further includes allocating tokens to command descriptors associated with I/O operations for accessing the shared storage resource, identifying a violation of any policy context of any compute node based on availability of the tokens, and throttling I/O operations by other compute nodes until the violation disappears. | 2017-06-15 |
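The token-based enforcement can be sketched as follows (a much-simplified software model; in the patent, descriptors, policy contexts and throttling are mechanisms inside the I/O adapter, and the names here are assumptions):

```python
class TokenPolicy:
    """Allocate tokens to I/O command descriptors; an I/O proceeds only
    if the compute node's policy context allows it and a token is
    available. Token exhaustion signals a violation to throttle on."""
    def __init__(self, tokens, allowed_nodes):
        self.tokens = tokens
        self.allowed = set(allowed_nodes)

    def submit(self, node):
        if node not in self.allowed:
            return False   # policy context forbids this I/O
        if self.tokens == 0:
            return False   # no token available: violation, throttle the I/O
        self.tokens -= 1
        return True

    def complete(self):
        self.tokens += 1   # return the token when the I/O finishes
```

Returning tokens on completion lets throttled nodes resume once the violation disappears.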