14th week of 2019 patent application highlights part 35 |
Patent application number | Title | Published |
20190102232 | ADAPTIVE, PERFORMANCE-ORIENTED, AND COMPRESSION-ASSISTED ENCRYPTION SCHEME - An approach for an adaptive, performance-oriented, and compression-assisted encryption scheme implemented on a host computer to adaptively improve utilization of CPU resources is provided. The method comprises queueing a new data packet and determining a size of the new data packet. Based on historical data, a plurality of already encrypted data packets is determined. Based on information stored for the plurality of already encrypted data packets, an average ratio of compression for the plurality of already encrypted data packets is determined. Based on the average ratio of compression, a throughput of compression value and a throughput of encryption value, a prediction whether compressing the new data packet will reduce a CPU load is derived. If it is determined that compressing the new data packet will improve utilization of the CPU resources, then a compressed new data packet is generated by compressing the new data packet. | 2019-04-04 |
20190102233 | METHOD FOR POWER OPTIMIZATION IN VIRTUALIZED ENVIRONMENTS AND SYSTEM IMPLEMENTING THE SAME - A power optimization system and method for virtualized environments at least comprising a domain layer on which a plurality of virtual machines are implemented, a hardware layer and hypervisor layer configured for abstracting between the virtual machines of the domain layer and the hardware layer, wherein the system comprises a hardware interface to set a limit on the power consumption of at least one processing means implemented in a hardware layer and a software structure for performing an optimization of the available resource allocations for the running workload in terms of power consumption, wherein the software structure is an Observe-Decide-Act control loop structure, comprising an observe stage, a decide stage and an act stage, and wherein the observe stage interfaces with means configured for reading performance values inside at least one model specific register of the at least one processing means. | 2019-04-04 |
20190102234 | Executing runtime programmable applications - According to an example aspect of the present invention, there is provided system comprising a first apparatus configured to execute a first runtime programmable application, and a second apparatus configured to execute a second runtime programmable application, wherein the first runtime programmable application program comprises one or more references to variables of the second apparatus, wherein the first apparatus and the second apparatus are configured to communicate one or more variables defined by the references over a network connection and to set at least one of a variable of the first apparatus and a variable of the second apparatus on the basis of communicated runtime variables. | 2019-04-04 |
20190102235 | Notifications - In an embodiment, a server may support notifications using an underlying channel-based messaging scheme. A client may register for one or more notifications from a server, or may poll the server for notifications, using messages on the channel. The notification events may be transmitted on another channel to the client. The flexibility of the notification system may permit distributed systems to effectively manage their notification events, in some embodiments. | 2019-04-04 |
20190102236 | OVERLAPPED RENDEZVOUS MEMORY REGISTRATION - Methods, software, and systems for improved data transfer operations using overlapped rendezvous memory registration. Techniques are disclosed for transferring data between a first process operating as a sender and a second process operating as a receiver. The sender sends a PUT request message to the receiver including payload data stored in a send buffer and first and second match indicia. Subsequent to or in conjunction with sending the PUT request message, the send buffer is exposed on the sender. The first match indicia is used to determine whether the PUT request is expected or unexpected. If the PUT request is unexpected, an RMA GET operation is performed using the second match indicia to pull data from the send buffer and write the data to a memory region in the user space of the process associated with the receiver. The RMA GET operation may be retried one or more times in the event that the send buffer has yet to be exposed. If the PUT request message is expected, the data payload with the PUT request is written to a receive buffer on the receiver determined using the first match indicia. The techniques include implementations using the Portals APIs and Message Passing Interface (MPI) applications and provide an improved rendezvous protocol. | 2019-04-04 |
20190102237 | RECOMMENDING APPLICATIONS BASED ON CALL REQUESTS BETWEEN APPLICATIONS - Recommending applications based on call requests between applications is disclosed, including: receiving a plurality of sets of application call request recordings from respective ones of a plurality of client devices; using the plurality of sets of application call request recordings to generate association relationships between a first application and one or more other applications; determining a set of application recommendation information determined based at least in part on the association relationships between the first application and the one or more other applications; and sending the set of application recommendation information to a recipient client device. | 2019-04-04 |
20190102238 | API REGISTRY IN A CONTAINER PLATFORM PROVIDING PROPERTY-BASED API FUNCTIONALITY - A method of customizing deployment and operation of services in container environments may include receiving, at an API registry, a property for a service that is or will be encapsulated in a container that is or will be deployed in a container environment. The method may also include determining whether the property for the service affects the deployment of the service to the container environment, and in response to a determination that the property affects the deployment of the service, deploying the service based at least in part on the property. The method may additionally include determining whether the property for the service affects the generation of a client library that calls the service in the container environment, and in response to a determination that the property affects the generation of the client library, generating the client library based at least in part on the property. | 2019-04-04 |
20190102239 | API REGISTRY IN A CONTAINER PLATFORM FOR AUTOMATICALLY GENERATING CLIENT CODE LIBRARIES - A method of providing Application Programming Interface (API) functions for registered service endpoints in container environments may include receiving, at an API registry, an API definition that may include an endpoint of a first service that is encapsulated in a container that is deployed in a container environment and one or more API functions. The method may also include creating, by the API registry, a binding between the one or more API functions and the endpoint of the service; receiving, by the API registry, a request from a second service to use the first service; and providing, by the API registry, the one or more API functions to the second service. | 2019-04-04 |
20190102240 | PLATO ANOMALY DETECTION - A method for continuous data anomaly detection includes identifying a period of time covered by metrics data stored in a repository. The stored metrics data is categorized into a plurality of non-overlapping time segments. Statistical analysis of the stored metrics data is performed based on the identified period of time. A range of acceptable metric values is dynamically generated based on the performed statistical analysis. | 2019-04-04 |
20190102241 | SLICE METADATA FOR OPTIMIZED DSN MEMORY STORAGE STRATEGIES - A method begins by a dispersed storage (DS) processing unit of a dispersed storage network (DSN) generating a hint regarding data stored or to be stored. When the data is to be stored, the DS processing module divides the data into data segments and dispersed storage error encodes a data segment of the data segments to produce a set of encoded data slices. The method continues by the DS processing unit generating a set of hints based on the hint and affiliating the set of hints with the set of encoded data slices to produce a set of affiliated encoded data slices. The method continues by the DS processing unit sending the set of affiliated encoded data slices to a set of storage units of the DSN such that a storage unit of the set of storage units stores an encoded data slice in accordance with a corresponding hint. | 2019-04-04 |
20190102242 | SYSTEM AND METHODS FOR HARDWARE-SOFTWARE COOPERATIVE PIPELINE ERROR DETECTION - A family of software-hardware cooperative mechanisms to accelerate intra-thread duplication leverages the register file error detection hardware to implicitly check the data from duplicate instructions, avoiding the overheads of instruction checking and enforcing low-latency error detection with strict error containment guarantees. | 2019-04-04 |
20190102243 | SYSTEM ERROR CODES FOR EDGE ENCRYPTION - Embodiments are disclosed herein that provide users of a cloud computing system with the ability to determine, display, prioritize, and/or handle error messages, e.g., using a system-wide standardized naming format. In some embodiments, the appropriate system-wide standardized error messages may be determined, even in situations where at least some of the data underlying the error is encrypted and remains unknown to the hosted cloud computing system. The system-wide standardized error messages may include, e.g., an indication of a company's name, an application name, as well as a unique error code. The standardized error message may also include information as to how the error may potentially be remediated. Using these embodiments, users may be able to more quickly understand which errors to address first and what possible solutions may be employed in order to resolve those errors—while remaining confident that any encrypted information has remained uncompromised. | 2019-04-04 |
20190102244 | METHOD AND SYSTEM FOR PREDICTING FAILURE EVENTS - Embodiments described herein provide a predictive failure analysis that enables design-time error and exception handling techniques to be supplemented or assisted by a predictive failure analysis system. One embodiment provides an electronic device, comprising a non-transitory machine-readable medium to store instructions; one or more processors to execute the instructions; and a memory coupled to the one or more processors, the memory to store the instructions which, when executed by the one or more processors, cause the one or more processors to receive injection of dynamic error detection logic into the instructions, the dynamic error detection logic including an error handling update to indicate a response to a predicted failure; receive a set of events indicative of the predicted failure; and respond to the set of events according to the error handling update. | 2019-04-04 |
20190102245 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a nonvolatile memory device; and a controller configured to include a plurality of cores, wherein, when an error occurs in at least one core among the cores, a first core which is coupled with the nonvolatile memory device transmits state records of one or more cores at an error occurrence time to the nonvolatile memory device. | 2019-04-04 |
20190102246 | MEMORY CONTROLLER ERROR CHECKING PROCESS USING INTERNAL MEMORY DEVICE CODES - An apparatus is described. The apparatus includes a memory controller to receive data from a memory device. The memory controller includes error checking logic circuitry. The error checking logic circuitry is to receive an error checking code from the memory device. The error checking code is generated within the memory device from the data. The error checking logic circuitry includes circuitry to generate a second version of the error checking code from the data that was received from the memory device and compare the received error checking code with the second version of the error checking code to understand if the data that was received from the memory device is corrupted. | 2019-04-04 |
20190102247 | CACHE BASED RECOVERY OF CORRUPTED OR MISSING DATA - Systems and methods for recovering corrupted data or missing data from a cache are provided. When a data corruption is discovered in a storage system, the cache may be searched to determine if a valid copy of the corrupted data can be recovered from the cache. | 2019-04-04 |
20190102248 | MITIGATING SILENT DATA CORRUPTION IN ERROR CONTROL CODING - One embodiment provides a silent data corruption (SDC) mitigation circuitry. The SDC mitigation circuitry includes a comparator circuitry and an SDC mitigation logic. The comparator circuitry is to compare a successful decoded codeword and a corresponding received codeword, the successful decoded codeword having been deemed a success by an error correction circuitry. The SDC mitigation logic is to reject the successful decoded codeword if a distance between the corresponding received codeword and the successful decoded codeword is greater than or equal to a threshold. | 2019-04-04 |
20190102249 | Redundancy Coding Stripe Based On Coordinated Internal Address Scheme Across Multiple Devices - A system and method pertains to operating non-volatile memory systems. Technology disclosed herein efficiently uses memory available in non-volatile storage devices in a non-volatile memory system. In some aspects, non-volatile storage devices enforce a redundancy coding stripe across the non-volatile storage devices formed from chunks of data having internal addresses assigned in a coordinated scheme across the storage devices. In some aspects, non-volatile storage devices enforce a redundancy coding stripe across the non-volatile storage devices at the same internal addresses in the respective non-volatile storage devices. | 2019-04-04 |
20190102250 | Redundancy Coding Stripe Based On Internal Addresses Of Storage Devices - Technology disclosed herein efficiently uses memory available in non-volatile storage devices in a non-volatile memory system. In one aspect, a manager collects enough data to fill an entire chunk of a redundancy coding stripe, and requests that the entire chunk be written together in a selected non-volatile storage device. The selected non-volatile storage device may return an internal address at which the entire chunk was written. The manager may store a stripe map that identifies the internal addresses at which each chunk was stored. | 2019-04-04 |
20190102251 | SYSTEMS AND METHODS FOR DETECTING AND CORRECTING MEMORY CORRUPTIONS IN SOFTWARE - Examples described herein generally relate to a computer device including a memory and at least one processor configured to execute a process and manage the memory for the process. The processor is configured to receive a registration from the process for notifications regarding errors in the memory. The processor is configured to create first metadata regarding content of a portion of the memory allocated to the process when a physical memory address associated with a virtual address for the portion of memory is made non-writable to the process. The processor is configured to detect an error in the memory by comparing second metadata for current contents of the portion of memory to the first metadata. The processor is configured to provide a notification to the process in response to detecting the error. In some implementations, the processor is configured to determine whether the error is correctable based on the metadata. | 2019-04-04 |
20190102252 | SCALABLE CLOUD - ASSIGNING SCORES TO REQUESTERS AND TREATING REQUESTS DIFFERENTLY BASED ON THOSE SCORES - A method begins by a computing device of a dispersed storage network (DSN) maintaining a queue of pending requests to access the DSN while new requests are added to the queue and executed requests are deleted from the queue. The method continues by the computing device determining, for each pending request in the queue, a prioritization score to produce a plurality of prioritization scores. The prioritization score is determined by determining an identity of a requestor associated with a pending request, obtaining a trust score based on the requestor's identity, and obtaining a compliance score based on the requestor's identity. The trust score indicates the requestor's level of legitimate use of the DSN and the compliance score indicates the requestor's level of compliance with DSN system requests. The method continues by the computing device executing pending requests of the queue in accordance with the plurality of prioritization scores. | 2019-04-04 |
20190102253 | TECHNIQUES FOR MANAGING PARITY INFORMATION FOR DATA STORED ON A STORAGE DEVICE - Disclosed herein are techniques for managing parity information for data stored on a storage device. According to some embodiments, the method includes the steps of (1) receiving a request to store data into the storage device, (2) writing respective portions of the data into a plurality of data pages included in a first stripe of the storage device, where each data page is stored on a respective different die of the storage device, (3) calculating primary parity information for the first stripe, (4) writing the primary parity information into a primary parity page included in a second stripe of the storage device, (5) calculating secondary parity information for the second stripe, and (6) writing the secondary parity information into a secondary parity page included in a third stripe of the storage device. Additionally, a copy of the secondary parity information can be established to further-enhance redundancy. | 2019-04-04 |
20190102254 | SECURING AGAINST ERRORS IN AN ERROR CORRECTING CODE (ECC) IMPLEMENTED IN AN AUTOMOTIVE SYSTEM - In general, data is susceptible to errors caused by faults in hardware (i.e. permanent faults), such as faults in the functioning of memory and/or communication channels. To detect errors in data caused by hardware faults, the error correcting code (ECC) was introduced, which essentially provides a sort of redundancy to the data that can be used to validate that the data is free from errors caused by hardware faults. In some cases, the ECC can also be used to correct errors in the data caused by hardware faults. However, the ECC itself is also susceptible to errors, including specifically errors caused by faults in the ECC logic. A method, computer readable medium, and system are thus provided for securing against errors in an ECC. | 2019-04-04 |
20190102255 | Cognitive Analysis and Resolution of Erroneous Software Patches - Resolving software patch issues is provided. Recorded activities performed by users to resolve an issue with a patch applied to an application on a group of client devices are compared. A set of common user activities are identified within the recorded activities performed by the users. A subset of highest ranking common user activities is selected from the set of common user activities. A fix for the issue with the patch is generated based on the subset of highest ranking common user activities. Corrective action based on the fix is taken to resolve the issue with the patch on a client device, the client device experiencing the issue resolved by users on the group of client devices. | 2019-04-04 |
20190102256 | INCREMENTAL VAULT TO OBJECT STORE - Systems and methods for managing incremental data backups on an object store. A computing device receives first data representing a changed chunk of data in a revision of a data volume on a storage device, the changed chunk includes data having changes from previous data of a previous revision. The computing device creates a block of data representing a copy of the changed chunk on the object store, the object store also includes a previous revision block representing previous revision data. The computing device determines a previous index stored on the object store corresponding to the previous revision, which includes entries including at least one corresponding to the previous revision block. The computing device creates a copy of at least one previous index from the object store, and a revised index that updates the corresponding entry with updated entry data representing the changed block. | 2019-04-04 |
20190102257 | PARTIAL DATABASE RESTORATION - Described herein is a system that restores a database by processing a portion of the database. The system restores the database to a previous state at a particular time by reverting data entries that have changed since the time to their initial values before the change. Data entries that have changed after the restore time are identified. For the data entries that have changed after the restore time, their initial values before the change are determined from various sources. The system determines a database version that is created most recently before the restore time. The system additionally identifies changes to the database between the restore time and when the database version is created. The initial values can be determined from either the database version or the changes made to the database between the restore time and when the database version is created. | 2019-04-04 |
20190102258 | System and Method for Procedure for Point-in-Time Recovery of Cloud or Database Data and Records in Whole or in Part - A user interface, system and method are provided for the recovery and restoration of software records or elements thereof to earlier record or data iterations or versions in order to overcome or repair consequences of database corruption or data deletion. A source database and/or a current archive database further enable recording of records of the source database to an historical data archive, from which records or elements thereof may be recovered. A restore command is detectable by the system as directly input via a user interface and/or as sent via an electronics communications modality or network. The databases and archives may have access to multiple iterations/versions of a record including the original record version as stored in an historical archive or elsewhere in a network. The records may optionally be updated in a batch method, in real time, and/or as the software records are created. | 2019-04-04 |
20190102259 | LOGGING PROCESS IN A DATA STORAGE SYSTEM - A logging process in a data storage system having a set of storage tiers, each storage tier of the set of storage tiers having different performance characteristics, wherein the set of storage tiers is divided into a plurality of subsets of storage tiers using the performance characteristics, may include initiating the logging process for creating a separate log file for each of the plurality of subsets of storage tiers for maintaining a history of data changes in the subset of storage tiers, thereby creating a plurality of log files. In response to a change in data stored in at least one storage tier of a subset of storage tiers of the plurality of subsets of storage tiers, one or more log records including information about the change may be generated and written into respective log files. | 2019-04-04 |
20190102260 | FAILOVER SERVICE TO SUPPORT HIGH AVAILABILITY OF MONOLITHIC SOFTWARE APPLICATIONS - To eliminate additional development for monolithic applications, the high availability services are externalized from the application and performed by an agent executing alongside an application on a server or computing device. The agent is provided resources for verifying that an application is active and for controlling the application. The agent can use the provided resources to initialize a failover instance of the application as needed. Additionally, the agent can communicate and broadcast the status of its monitored application(s) to other agents through a shared database so that an agent on another server can initialize a failover instance of the application as needed. The agent can synchronize configuration files among the one or more instances of an application so that the application executes uniformly across all instances. The file synchronization is performed externally from the application and does not require additional development or modification of the existing monolithic application. | 2019-04-04 |
20190102261 | CIRCUIT AND METHOD FOR STORING INFORMATION IN NON-VOLATILE MEMORY DURING A LOSS OF POWER EVENT - A data storage circuit for storing data from volatile memory in response to a power loss, the data storage circuit including an input for receiving a power loss signal in response to a power loss from at least one power source, an input configured to receive data from a volatile memory, a single block of non-volatile matrix of memory cells and a driver circuit coupled to said single block of non-volatile matrix of memory cells. The driver circuit is configured to write data to and read data from said single block of non-volatile matrix of memory cells. The single block of non-volatile matrix of memory cells can be provided as a single row electrically erasable programmable read only memory (EEPROM). | 2019-04-04 |
20190102262 | AUTOMATED CONTINUOUS CHECKPOINTING - A storage controller performs continuous checkpointing. With continuous checkpointing, the information necessary for system rollback is continuously recorded without the need of a specific command. With the rollback information, the system can rollback or restore to any previous state up to a number of previous writes or up to an amount of data. The number of writes or the amount of data that can be restored is configurable. | 2019-04-04 |
20190102263 | SYSTEM AND METHOD FOR IMPLEMENTING DATA MANIPULATION LANGUAGE (DML) ON HADOOP - An embodiment of the present invention is directed to creating a re-usable code component that may be used with the data manipulation and transformation tool to natively support DML functionality. In addition to Insert, Update, and Delete, an addition function directed to “DeDup” may be implemented as it is used frequently in data transformation processes. An embodiment of the present invention is directed to capability to roll-back to a prior version of the original dataset. Any number of versions as required may be maintained. | 2019-04-04 |
20190102264 | DATA STORAGE SYSTEM COMPRISING PRIMARY AND SECONDARY STORAGE SYSTEMS - Data is stored on a primary storage system and a copy of the data is stored on a secondary storage system. A determination is made that a connection between the systems is currently unavailable. Location data is maintained that identifies where changes have been made to the primary storage system while the connection is unavailable. Another determination is made that data has been lost at the secondary storage system. Recovery data required to repair the lost data is identified. Another determination is made that the connection to the secondary storage system is now available. The location data is updated with the locations of the recovery data. The secondary storage system is updated with data from the primary storage system as defined by the location data. | 2019-04-04 |
20190102265 | HIGHLY AVAILABLE STATEFUL CONTAINERS IN A CLUSTER ENVIRONMENT - A system for stateful containers in a distributed computing environment that includes a server cluster having a plurality of computing nodes communicatively connected via a network. Each computing node within the server cluster includes one or more virtual hosts, one or more containers operating on top of each virtual host and an application instantiation, operating on top of a container, communicatively coupled to a persistent storage medium. Each virtual host instantiates, and is tied to, a unique virtual internet protocol address that is linked to the persistent storage medium on which resides the application state data. | 2019-04-04 |
20190102266 | FAULT-TOLERANT STREAM PROCESSING - Techniques for providing fault-tolerant stream processing. An exemplary technique includes writing primary output events to a primary target and secondary output events to one or more secondary targets, where the primary output events are written by a primary server and the secondary output events are written by one or more secondary servers. The technique further includes receiving an election of a new primary server from a synchronization system upon a failure of the primary server, where the new primary server is elected from the one or more secondary servers. The technique further includes determining, by the new primary server, the primary output events that failed to be written to the primary target because of the failure of the primary server, and writing, by the new primary server, the failed primary output events to the primary target using the secondary output events read from the one or more secondary targets. | 2019-04-04 |
20190102267 | SESSION TEMPLATES - Techniques are disclosed herein for identifying, recording and restoring the state of a database session and various aspects thereof. A session template data structure is generated that includes session attribute values describing various aspects of the session that is established between a client system and a database management system (DBMS) and enables the client system to issue to the DBMS commands for execution. Based on the session attribute values, the DBMS may generate a template identifier corresponding to the session template data structure. The template identifier may be stored in an association with the session state that it partially (or in whole) represents. In an embodiment, when another state of a session is captured, if the template identifier for the state is the same, then rather than storing the attribute-value pairs for the other state, the template identifier is further associated with the other state. In an embodiment, a request boundary is detected where the session is known to be at a recoverable point. If recovery of the session is needed, the session state is restored, and replay of commands starts from this point. Each command replayed is verified to produce the same session state as it produced at original execution. If the session is determined to be at a safe point, then all the commands recorded for replay prior to the safe point may be deleted. | 2019-04-04 |
20190102268 | SEMICONDUCTOR DEVICE - A semiconductor device includes a common resource commonly used by plural processes executed on a processor, a semaphore controlling the possessory right of the common resource, and a semaphore management unit performing a process of acquiring the possessory right of the common resource to the semaphore in response to a request of a process performed on the processor. When a request to acquire the possessory right of the common resource is received from a first process in the plural processes and the possessory right cannot be obtained, the semaphore management unit switches the process executed on the processor to a second process, repeatedly performs a process of acquiring the possessory right requested by the first process to the semaphore and, when the possessory right requested by the first process is obtained, switches the process on the processor from the second process to the first process. | 2019-04-04 |
20190102269 | Bidirectional Replication - An example data storage system includes a first storage array having a first LUN and a second storage array having a second LUN. The first and second storage arrays may implement replication from the first LUN as a primary LUN to the second LUN as a secondary LUN. The first and second LUNs may both be an active target for host write I/O. The second storage array may, in response to receiving from a host a write that is directed to the second LUN, send write data of the write to the first storage array for replication while maintaining a copy of the write data in a fenced portion of a cache of the second storage array. The second storage array may wait to release the copy of the write data to the second LUN until a write acknowledgment is received from the first storage array. | 2019-04-04 |
20190102270 | PRINT VERIFICATION SYSTEM THAT REPORTS DEFECTIVE PRINTHEADS - Systems and methods are provided for print verification that reports defective printheads. One embodiment is a Print Verification System (PVS) that includes an interface to receive print data, and an imaging device to obtain image data of printed output of the print data. The PVS also includes a processor to detect a print error on a page by comparing the print data and the image data. The processor determines a lateral distance of a location of the print error with respect to an edge of the page, identifies a print engine that printed the page, determines a lateral offset of the print engine with respect to the edge of the page, identifies a printhead among a plurality of printheads of the print engine that caused the print error based on the lateral distance of the print error and the lateral offset of the print engine. | 2019-04-04 |
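The printhead-identification arithmetic in 20190102270 reduces to removing the engine's lateral offset from the error's lateral position and dividing by the per-head swath width. A minimal sketch, assuming a uniform printhead width (`identify_printhead` and its parameters are illustrative names, not the patented implementation):

```python
def identify_printhead(error_distance_mm, engine_offset_mm, head_width_mm, num_heads):
    """Map an error's lateral position (measured from the page edge) onto the
    printhead whose swath covers it, after removing the engine's own offset."""
    position_in_engine = error_distance_mm - engine_offset_mm
    if not 0 <= position_in_engine < head_width_mm * num_heads:
        return None  # error lies outside this engine's printable span
    return int(position_in_engine // head_width_mm)
```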
20190102271 | SEMICONDUCTOR DEVICE - There is a need to detect faults on a path between a memory access circuit and a shared resource, faults in a logic circuit, and faults in the shared resource. A semiconductor device includes: a first memory access circuit; a second memory access circuit to check the first memory access circuit; a memory that outputs a memory address based on a first access address input from the first memory access circuit; a duplexing comparison circuit that compares the first access address with a second access address output from the second memory access circuit; a first address comparison circuit that compares the first access address with the memory address; and an error control circuit that outputs a control signal based on a comparison result from the duplexing comparison circuit and a comparison result from the first address comparison circuit. | 2019-04-04 |
20190102272 | APPARATUS AND METHOD FOR PREDICTING A REDUNDANCY PERIOD - An apparatus comprises a plurality of memory units organised as a hierarchical memory system, wherein each of at least some of the memory units is associated with a processor element; predictor circuitry to perform a prediction process to determine a predicted redundancy period of result data of a data processing operation to be performed, indicating a predicted point when said result data will be next accessed; and an operation controller to cause a selected processor element to perform said data processing operation, wherein said selected processor element is selected based on said predicted redundancy period. | 2019-04-04 |
20190102273 | RESOLVING APPLICATION MULTITASKING DEGRADATION - Systems and methods for resolving application multitasking degradation are disclosed. In aspects, a computer implemented method is used with a user device including a multitasking operating system, shared user device resources, a first application and a second application. The method includes: running, simultaneously, the first application and the second application; measuring performance parameters for one or more application tasks of the first and second applications; and determining that one or more of the performance parameters of the one or more application tasks falls below a performance threshold value of an associated key performance indicator (KPI). The determination indicates degradation in performance of at least one of the first application and second application. The method further includes instructing the operating system to modify an allocation of the shared user device resources to address the degradation in performance of the at least one of the first application and second application. | 2019-04-04 |
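The degradation check in 20190102273 — comparing measured task performance against a per-KPI threshold — can be sketched as below. This is an assumed data layout (measurements keyed by application and task, thresholds keyed by task), not the patented method:

```python
def find_degraded_tasks(measurements, kpi_thresholds):
    """Return (app, task) pairs whose measured performance falls below
    the threshold value of the associated KPI."""
    degraded = []
    for (app, task), value in measurements.items():
        threshold = kpi_thresholds.get(task)
        if threshold is not None and value < threshold:
            degraded.append((app, task))
    return degraded
```

A resource manager would then reallocate shared resources toward the applications that appear in the returned list.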
20190102274 | Utilization Metrics for Processing Engines - In an embodiment, a processor includes multiple processing engines and a power control unit. The power control unit is to: maintain a first utilization metric for a first processing engine; detect a thread transfer from a first processing engine to a second processing engine; and generate, using the first utilization metric for the first processing engine, a second utilization metric for a second processing engine. Other embodiments are described and claimed. | 2019-04-04 |
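The idea in 20190102274 — seeding the destination engine's utilization metric from the source engine's when a thread migrates — can be illustrated with a simple running average. The class and the exponential-average update are illustrative assumptions, not the claimed mechanism:

```python
class UtilizationTracker:
    def __init__(self, num_engines):
        self.metric = [0.0] * num_engines

    def update(self, engine, busy_fraction, alpha=0.25):
        # Exponentially weighted running utilization per engine.
        self.metric[engine] = (1 - alpha) * self.metric[engine] + alpha * busy_fraction

    def on_thread_transfer(self, src, dst):
        # Generate the destination metric from the source metric, so the
        # utilization history follows the migrated thread.
        self.metric[dst] = self.metric[src]
```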
20190102275 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR MONITORING DATA ACTIVITY UTILIZING A SHARED DATA STORE - In accordance with embodiments, there are provided mechanisms and methods for monitoring data activity utilizing a shared data store. These mechanisms and methods for monitoring data activity utilizing a shared data store can enable enhanced data monitoring, more efficient data storage, improved system resource utilization, etc. | 2019-04-04 |
20190102276 | SYSTEMS AND METHODS FOR ROBUST ANOMALY DETECTION - A system, includes: a distributed cache that stores state information for a plurality of configuration items (CIs). Management, instrumentation, and discovery (MID) servers form a cluster, each of the MID servers including one or more processors that receive, from the distributed cache, a subset of the state information associated with assigned CIs and perform a statistical analysis on the subset of the state information. | 2019-04-04 |
20190102277 | CLASSIFYING WARNING MESSAGES GENERATED BY SOFTWARE DEVELOPER TOOLS - A method for classifying warning messages generated by software developer tools includes receiving a first data set. The first data set includes a first plurality of data entries, where each data entry is associated with a warning message generated based on a first set of software codes, includes indications for a plurality of features, and is associated with one of a plurality of class labels. A second data set is generated by sampling the first data set. Based on the second data set, at least one feature is selected from the plurality of features. A third data set is generated by filtering the second data set with the selected at least one feature. A machine learning classifier is determined based on the third data set. The machine learning classifier is used to classify a second warning message generated based on a second set of software codes to one of the plurality of class labels. | 2019-04-04 |
20190102278 | MEMORY LEAK PROFILING EVENTS - Techniques for profiling memory leaks are described. In one or more embodiments, a memory profiling system identifies a set of one or more objects on the heap during application runtime. For each respective object in the set of objects, the memory profiling system stores a set of sample information including a timestamp that identifies when an allocation on the heap memory was performed for the respective object and a stack trace identifying at least one subroutine that triggered the allocation on the heap memory. Responsive to detecting a memory leak, the memory profiling system generates a memory leak profile for at least one object in the set of objects that is causing the memory leak. The memory leak profile identifies when the allocation on the memory store for the at least one object was performed and information about objects that remained live after the potential memory leak. | 2019-04-04 |
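The per-object sample information in 20190102278 (allocation timestamp plus triggering stack trace) can be sketched with Python's standard `time` and `traceback` modules. `AllocationProfiler` is a hypothetical name for illustration only:

```python
import time
import traceback

class AllocationProfiler:
    """Record, per sampled object, when it was allocated and which
    subroutines triggered the allocation."""
    def __init__(self):
        self.samples = {}  # id(obj) -> (timestamp, stack trace)

    def record(self, obj):
        stack = traceback.extract_stack(limit=5)
        self.samples[id(obj)] = (time.time(), stack)

    def leak_profile(self, live_objects):
        # Objects still live after a suspected leak, with their allocation info.
        return {id(o): self.samples[id(o)]
                for o in live_objects if id(o) in self.samples}
```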
20190102279 | GENERATING AN INSTRUMENTED SOFTWARE PACKAGE AND EXECUTING AN INSTANCE THEREOF - Techniques for generating an instrumented software package and executing an instance thereof are disclosed. A software package, such as a container image, includes a library of system call wrapper functions. An instrumented system call wrapper function includes (a) a corresponding system call wrapper function and (b) instrumentation code. Instrumentation code is configured to perform one or more of: (a) capturing data associated with executing the set of operations associated with requesting the system call, and (b) manipulating execution of the set of operations associated with requesting the system call. An instrumented library, including instrumented system call wrapper functions, is added to the software package to generate an instrumented software package. An instrumentation configuration is applied to an instance of the instrumented software package. The instrumentation configuration indicates which portions of instrumentation code to set to an “on state,” and which portions of instrumentation code to set to an “off state.” | 2019-04-04 |
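The wrapper-plus-configuration pattern in 20190102279 — instrumentation code around each system-call wrapper, switchable per call site between an "on state" and an "off state" — can be sketched with a closure. The helper name `make_instrumented` and the dict-based configuration are assumptions for illustration:

```python
import os

def make_instrumented(wrapper, name, config, log):
    """Wrap a system-call wrapper function with capture code that the
    instrumentation configuration can switch on or off per call site."""
    def instrumented(*args, **kwargs):
        if config.get(name, False):     # "on state" for this portion
            log.append((name, args))    # capture data about the request
        return wrapper(*args, **kwargs)
    return instrumented

log = []
config = {"getpid": True, "getcwd": False}
getpid = make_instrumented(os.getpid, "getpid", config, log)
getcwd = make_instrumented(os.getcwd, "getcwd", config, log)
```

Calling both wrappers records only the `getpid` call, because its portion of instrumentation code is configured on.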
20190102280 | REAL-TIME DEBUGGING INSTANCES IN A DEPLOYED CONTAINER PLATFORM - A method may include receiving a request for a service at a container environment. The container environment may include a service mesh and a plurality of services encapsulated in a plurality of containers. The service may be encapsulated in first one or more containers. The method may also include determining that the request should be routed to a debug instance of the service; and instantiating the debug instance of the service. The debug instance may be encapsulated in second one or more containers and may include code implementing the service and one or more debugging utilities. The method may additionally include routing, by the service mesh, the request to the debug instance. | 2019-04-04 |
20190102281 | MEASURING AND IMPROVING TEST COVERAGE - Embodiments of the invention include methods and systems for improving test case coverage. Aspects of the invention include executing, by a processor, a first test case, where the first test case includes a plurality of system calls to an operating system. Prior to execution of each system call in the plurality of system calls in the first test case, executing, by the processor, a pre-exit instruction. Responsive to execution of the pre-exit instruction, collecting pre-exit system call data regarding each system call in the plurality of system calls for the first test case. The processor executes a post-exit instruction after completion of each system call in the plurality of system calls and responsive to execution of the post-exit instruction, collects post-exit system call data regarding each system call in the plurality of system calls for the first test case. | 2019-04-04 |
20190102282 | MEASURING AND IMPROVING TEST COVERAGE - Embodiments of the invention include methods and systems for improving test case coverage. Aspects of the invention include executing, by a processor, a first test case, where the first test case includes a plurality of system calls to an operating system. Prior to execution of each system call in the plurality of system calls in the first test case, executing, by the processor, a pre-exit instruction. Responsive to execution of the pre-exit instruction, collecting pre-exit system call data regarding each system call in the plurality of system calls for the first test case. The processor executes a post-exit instruction after completion of each system call in the plurality of system calls and responsive to execution of the post-exit instruction, collects post-exit system call data regarding each system call in the plurality of system calls for the first test case. | 2019-04-04 |
20190102283 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, GENERATION METHOD, AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including executing one of a plurality of programs, acquiring a status of variation in an internal state of a memory occurred in response to the executing, determining whether a specified status pattern is stored in a storage device that stores a plurality of status patterns of variation in an internal state of the memory, the specified status pattern satisfying a predetermined criterion regarding a similarity with the acquired status, when the specified status pattern is stored in the storage device, generating a test scenario that is a combination of programs including the executed program, and when the specified status pattern is not stored in the storage device, suppressing the generating the test scenario. | 2019-04-04 |
20190102284 | TESTING PRE AND POST SYSTEM CALL EXITS - Embodiments of the invention include systems for testing pre and post system call exits. Aspects include executing a first test case that comprises system calls, where the first test case initializes a common buffer and stores system call parameters for each of the system calls. A monitoring test case is executed comprising: a pre-exit instruction that is inserted before each system call in the first test case. A post-exit instruction is inserted after each of the system calls in the first test case. Execution of the pre-exit instruction is determined prior to an execution of each system call. A first bit location in the common buffer is set to one, based on determining that the pre-exit instruction executes. The system call is executed and execution of the post-exit instruction is determined. A second bit location in the common buffer is set to one based on determining that the post-exit instruction executes. | 2019-04-04 |
20190102285 | TESTING PRE AND POST SYSTEM CALL EXITS - Embodiments of the invention include systems for testing pre and post system call exits. Aspects include executing a first test case that comprises system calls, where the first test case initializes a common buffer and stores system call parameters for each of the system calls. A monitoring test case is executed comprising: a pre-exit instruction that is inserted before each system call in the first test case. A post-exit instruction is inserted after each of the system calls in the first test case. Execution of the pre-exit instruction is determined prior to an execution of each system call. A first bit location in the common buffer is set to one, based on determining that the pre-exit instruction executes. The system call is executed and execution of the post-exit instruction is determined. A second bit location in the common buffer is set to one based on determining that the post-exit instruction executes. | 2019-04-04 |
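The common-buffer bookkeeping described in 20190102284/20190102285 — a pre-exit bit set before each system call and a post-exit bit set after it — can be sketched as follows. The two-bits-per-call layout is an assumption for illustration:

```python
def run_monitored(syscalls, common_buffer):
    """For each system call, set its pre-exit bit before the call and its
    post-exit bit after it, mirroring the monitoring test case."""
    for i, call in enumerate(syscalls):
        common_buffer[2 * i] = 1       # pre-exit instruction executed
        call()
        common_buffer[2 * i + 1] = 1   # post-exit instruction executed
```

After the run, any zero bit pinpoints a system call whose pre- or post-exit never executed.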
20190102286 | VISUALIZATION OF VULNERABILITIES DETECTED BY STATIC APPLICATION TESTING - Vulnerability testing of applications may include one or more of identifying a number of paths from a software application being tested, identifying a number of nodes associated with the paths, determining one or more of the paths which share one or more of the nodes, designating the paths which share the nodes as overlapping paths, and displaying the overlapping paths and the shared nodes as an interactive visualization to identify optimal locations to fix one or more vulnerability findings. | 2019-04-04 |
20190102287 | REMOTE PERSISTENT MEMORY ACCESS DEVICE - Systems, apparatuses and methods may provide for technology that detects received data at a network interface controller. The network interface controller may be connected with a local memory including a persistent non-volatile memory directly accessible only by the network interface controller. The technology may determine whether to store the received data in the local memory or a system memory including a persistent non-volatile memory region. The technology may store the received data in the local memory or the system memory according to the determining. | 2019-04-04 |
20190102288 | CONTROL MODULES, MULTI-LEVEL DATA STORAGE DEVICES, MULTI-LEVEL DATA STORAGE METHODS, AND COMPUTER READABLE MEDIA - A control module for a multi-level data storage device having a plurality of memory devices is disclosed. The control module may include: an access determination circuit configured to determine that access has been made to a piece of data stored on at least one of the plurality of memory devices, the piece of data associated with a level being one of a first level, a second level, or a third level; a level management circuit configured to change the level from the third level to the second level or from the second level to the first level upon determining that access has been made to the piece of data; and a memory controller configured to promote the piece of data in response to whether the level is the first level, the second level or the third level, wherein at least two levels of the first level, the second level, and the third level are associated with one of the plurality of memory devices. | 2019-04-04 |
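The level-management rule in 20190102288 — data starts at the third level and moves one level toward the first on each access — can be sketched in a few lines. The dict-based level table and function name are illustrative assumptions:

```python
def on_access(levels, key):
    """Promote a piece of data one level toward level 1 each time it is accessed."""
    level = levels.get(key, 3)       # unseen data starts at the third level
    levels[key] = max(1, level - 1)  # 3 -> 2 -> 1, never below 1
    return levels[key]
```

A memory controller would then place or promote the data on the memory device associated with its current level.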
20190102289 | Apparatus and Method of Damage Recovery for Storage Class Memory - A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to a write cache hit is determined. If the current data to be written corresponds to the write cache hit, the current data are written to a write cache as well as to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to a write cache miss, the current data are written to the destined location in the nonvolatile memory. If the current data to be written corresponds to the write cache miss and the write cache is not full, the current data is also written to the write cache. In another embodiment, the wear leveling control technique also includes address rotation process to achieve long-term wear leveling as well. | 2019-04-04 |
20190102290 | Meta Data Arrangement for Wear Leveling of Storage Class Memory - A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to a write cache hit is determined. If the current data to be written corresponds to the write cache hit, the current data are written to a write cache as well as to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to a write cache miss, the current data are written to the destined location in the nonvolatile memory. If the current data to be written corresponds to the write cache miss and the write cache is not full, the current data is also written to the write cache. In another embodiment, the wear leveling control technique also includes address rotation process to achieve long-term wear leveling as well. | 2019-04-04 |
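The hit/miss routing shared by 20190102289 and 20190102290 can be sketched as below. This is a simplified model under assumed names (`handle_write`, the `alternate_location` callback, and a fixed cache capacity are illustrative, not the patented wear-leveling control):

```python
def handle_write(addr, data, write_cache, nvm, alternate_location, cache_capacity=4):
    """Route a write according to hit/miss: on a hit, write the cache and a
    designated alternate NVM location; on a miss, write the destined location
    (and also cache the data if the write cache is not full)."""
    if addr in write_cache:                    # write cache hit
        write_cache[addr] = data
        nvm[alternate_location(addr)] = data   # designated, not destined, location
    else:                                      # write cache miss
        nvm[addr] = data                       # destined location
        if len(write_cache) < cache_capacity:
            write_cache[addr] = data
```

Diverting repeated writes to a designated alternate location is what spreads wear away from a hot destined location.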
20190102291 | DATA STORAGE DEVICE AND METHOD FOR OPERATING NON-VOLATILE MEMORY - Device-based space allocation and host-based mapping table searching are disclosed for operating a non-volatile memory. In response to a write command from a host that indicates a write logical address, a controller at the device end determines a write physical address and allocates the non-volatile memory to provide a space in the write physical address to store write data. The controller transmits the write physical address to the host and thereby the host establishes a mapping table on the host. The mapping table records the mapping relationship between the write logical address and the write physical address. | 2019-04-04 |
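The split of responsibilities in 20190102291 — the device allocates space and reports the physical address, while the host maintains the logical-to-physical mapping table — can be sketched with two small classes. `Device` and `Host` are hypothetical names for illustration:

```python
class Device:
    """Device end: allocates space and returns the write physical address."""
    def __init__(self):
        self.next_free = 0
        self.flash = {}

    def write(self, data):
        phys = self.next_free          # device-based space allocation
        self.next_free += 1
        self.flash[phys] = data
        return phys                    # reported back to the host

class Host:
    """Host end: establishes and searches the mapping table."""
    def __init__(self, device):
        self.device = device
        self.mapping = {}              # write logical address -> physical address

    def write(self, logical, data):
        self.mapping[logical] = self.device.write(data)

    def read(self, logical):
        return self.device.flash[self.mapping[logical]]
```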
20190102292 | COHERENT MEMORY DEVICES OVER PCIe - There is disclosed in an example a peripheral component interconnect express (PCIe) controller to provide coherent memory mapping between an accelerator memory and a host memory address space, having: a PCIe controller hub including extensions to provide a coherent accelerator interconnect (CAI) to provide bias-based coherency tracking between the accelerator memory and the host memory address space; wherein the extensions include: a mapping engine to provide opcode mapping between PCIe instructions and on-chip system fabric (OSF) instructions for the CAI; and a tunneling engine to provide scalable memory interconnect (SMI) tunneling of host memory operations to the accelerator memory via the CAI. | 2019-04-04 |
20190102293 | STORAGE SYSTEM WITH INTERCONNECTED SOLID STATE DISKS - An embodiment of a semiconductor package apparatus may include technology to provide a first interface between a first storage device and a host device, and provide a second interface directly between the first storage device and a second storage device. Other embodiments are disclosed and claimed. | 2019-04-04 |
20190102294 | SEMICONDUCTOR DEVICE - A semiconductor device includes a decoder configured to receive an extended mode register set (EMRS) code including specific information, and decode the received EMRS code to acquire the specific information; a peripheral controller configured to generate a control signal based on the specific information; and a peripheral region including a plurality of buffers, the plurality of buffers being configured to be controlled by the control signal, wherein the specific information includes information indicating an expected bandwidth of input data that is to be input to one of the plurality of buffers. | 2019-04-04 |
20190102295 | METHOD AND APPARATUS FOR ADAPTIVELY SELECTING DATA TRANSFER PROCESSES FOR SINGLE-PRODUCER-SINGLE-CONSUMER AND WIDELY SHARED CACHE LINES - A method for adaptively performing a set of data transfer processes in a multi-core processor is described. The method may include receiving, by a shared cache from a first core cache, a first request for a cache line; determining, by the shared cache in response to receipt of the first request, whether the cache line is a widely-shared cache line or a single-producer-single-consumer cache line; and performing, by the first core cache and a second core cache, a three-hop data transfer process in response to determining that the cache line is a single-producer-single-consumer cache line, wherein the three-hop data transfer process transfers the cache line directly from the second core cache to the first core cache. | 2019-04-04 |
20190102296 | DATA PRESERVATION AND RECOVERY IN A MEMORY COMPONENT - In one embodiment, a nonvolatile memory of a component such as a storage drive preserves write data in the event of a write data programming failure in the memory. Write data is preserved in the event of cached writes by data preservation logic in registers and data recovery logic recovers the preserved data and outputs the recovered data from the storage drive. Other aspects are described herein. | 2019-04-04 |
20190102297 | SYSTEM AND METHOD FOR BROADCAST CACHE INVALIDATION - One embodiment includes a system comprising a repository configured to store objects, an object cache configured to cache objects retrieved from the repository by a node, a memory configured to store a broadcast cache invalidation queue accessible by a plurality of nodes and an invalidation status, a processor coupled to the memory and a computer readable medium storing computer-executable instructions. The computer-executable instructions can be executable to store cache invalidations in the invalidation queue, the cache invalidations identifying objects affected by operations, access the invalidation status to determine a last processed invalidation from the invalidation queue, determine a set of unprocessed invalidations from the cache invalidation queue, the unprocessed invalidations subsequent to the last processed invalidation, clear cached objects from the object cache based on the set of unprocessed invalidations and update the invalidation status based on a last invalidation from the set of unprocessed invalidations. | 2019-04-04 |
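The queue-processing loop in 20190102297 — find invalidations newer than the node's last processed one, clear the named objects from the object cache, then advance the invalidation status — can be sketched as follows (list-as-queue and dict-as-cache are illustrative assumptions):

```python
def process_invalidations(queue, status, object_cache):
    """Clear cached objects named by invalidations newer than the last one
    this node processed, then advance the node's invalidation status."""
    last = status["last_processed"]        # index of the last processed entry
    unprocessed = queue[last + 1:]
    for obj_id in unprocessed:
        object_cache.pop(obj_id, None)     # clear affected object if cached
    if unprocessed:
        status["last_processed"] = len(queue) - 1
```

Each node keeps its own status, so many nodes can consume the same broadcast queue independently.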
20190102298 | VARIABLE MODULATION SCHEME FOR MEMORY DEVICE ACCESS OR OPERATION - Methods, systems, and devices that support variable modulation schemes for memory are described. A device may switch between different modulation schemes for communication based on one or more operating parameters associated with the device or a component of the device. The modulation schemes may involve amplitude modulation in which different levels of a signal represent different data values. For instance, the device may use a first modulation scheme that represents data using two levels and a second modulation scheme that represents data using four levels. In one example, the device may switch from the first modulation scheme to the second modulation scheme when bandwidth demand is high, and the device may switch from the second modulation scheme to the first modulation scheme when power conservation is in demand. The device may also, based on the operating parameter, change the frequency of the signal pulses communicated using the modulation schemes. | 2019-04-04 |
20190102299 | SYSTEMS, METHODS AND APPARATUS FOR FABRIC DELTA MERGE OPERATIONS TO ENHANCE NVMEOF STREAM WRITES - A method and apparatus for performing a data transfer, which include a selection a data transfer operation mode, based on telemetry data, from a first operation mode where a first type of data is transferred from a memory of a computing system to one or more shared storage devices, and a second operation mode where a second type of data is transferred from the memory to the one or more shared storage devices, the first type of data being associated with a first range of address space of the one or more shared storage devices, the second type of data being associated with a second range of address space of the one or more shared storage devices different from the first range of address space. Furthermore, a data transfer from the memory to the one or more shared storage devices in the selected data transfer operation mode may be included. | 2019-04-04 |
20190102300 | APPARATUS AND METHOD FOR MULTI-LEVEL CACHE REQUEST TRACKING - An apparatus and method for multi-level cache request tracking. For example, one embodiment of a processor comprises: one or more cores to execute instructions and process data; a memory subsystem comprising a system memory and a multi-level cache hierarchy; a primary tracker to store a first entry associated with a memory request to transfer a cache line from the system memory or a first cache within the cache hierarchy to a second cache; primary tracker allocation circuitry to allocate and deallocate entries within the primary tracker; a secondary tracker to store a second entry associated with the memory request; secondary tracker allocation circuitry to allocate and deallocate entries within the secondary tracker; the primary tracker allocation circuitry to deallocate the first entry in response to a first indication that one or more cache coherence requirements associated with the cache line have been resolved, the secondary tracker allocation circuitry to deallocate the second entry in response to a second indication related to transmission of the cache line to the second cache. | 2019-04-04 |
20190102301 | TECHNOLOGIES FOR ENFORCING COHERENCE ORDERING IN CONSUMER POLLING INTERACTIONS - Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein. | 2019-04-04 |
20190102302 | PROCESSOR, METHOD, AND SYSTEM FOR CACHE PARTITIONING AND CONTROL FOR ACCURATE PERFORMANCE MONITORING AND OPTIMIZATION - Processor, method, and system for tracking partition-specific statistics across cache partitions that apply different cache management policies is described herein. One embodiment of a processor includes: a cache; a cache controller circuitry to partition the cache into a plurality of cache partitions based on one or more control addresses; a cache policy assignment circuitry to apply different cache policies to different subsets of the plurality of cache partitions; and a cache performance monitoring circuitry to track cache events separately for each of the cache partitions and to provide partition-specific statistics to allow comparison between the plurality of cache partitions as a result of applying the different cache policies in a same time period. | 2019-04-04 |
20190102303 | SOFTWARE-TRANSPARENT HARDWARE PREDICTOR FOR CORE-TO-CORE DATA TRANSFER OPTIMIZATION - Apparatus, method, and system for implementing a software-transparent hardware predictor for core-to-core data communication optimization are described herein. An embodiment of the apparatus includes a plurality of hardware processor cores each including a private cache; a shared cache that is communicatively coupled to and shared by the plurality of hardware processor cores; and a predictor circuit. The predictor circuit is to track activities relating to a plurality of monitored cache lines in the private cache of a producer hardware processor core (producer core) and to enable a cache line push operation upon determining a target hardware processor core (target core) based on the tracked activities. An execution of the cache line push operation is to cause a plurality of unmonitored cache lines in the private cache of the producer core to be moved to the private cache of the target core. | 2019-04-04 |
20190102304 | METHOD AND APPARATUS FOR CACHE PRE-FETCH WITH OFFSET DIRECTIVES - A method and apparatus for pre-fetching data into a cache using a hardware element that includes registers for receiving a reference for an initial pre-fetch and a stride-indicator. The initial pre-fetch reference allows for direct pre-fetch of a first portion of memory. A stride-indicator is also received and is used along with the initial pre-fetch reference in order to generate a new pre-fetch reference. The new pre-fetch reference is used to fetch a second portion of memory. | 2019-04-04 |
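The reference-generation rule in 20190102304 — an initial pre-fetch reference plus a stride indicator yields each new pre-fetch reference — amounts to repeated addition. A minimal sketch (function name and the fixed count parameter are illustrative):

```python
def prefetch_addresses(initial_ref, stride, count):
    """Generate pre-fetch references: the initial reference, then each new
    reference derived by adding the stride indicator to the previous one."""
    refs = [initial_ref]
    for _ in range(count - 1):
        refs.append(refs[-1] + stride)
    return refs
```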
20190102305 | METHOD AND ELECTRONIC DEVICE FOR ACCESSING DATA - Various embodiments of the present disclosure generally relate to a method and an electronic device for reading data. Specifically, the method comprises receiving a request for reading the target data, and in response to the request, searching for the target data by searching a data index generated in a cache for reading data. The method further comprises in response to the target data being found, providing the target data. A corresponding system, device and computer program product are also provided. | 2019-04-04 |
20190102306 | MAINTAINING TRACK FORMAT METADATA FOR TARGET TRACKS IN A TARGET STORAGE IN A COPY RELATIONSHIP WITH SOURCE TRACKS IN A SOURCE STORAGE - Provided are a computer program product, system, and method for maintaining track format metadata for target tracks in a target storage in a copy relationship with source tracks in a source storage. Upon receiving a request to a requested target track in the target storage, the source track for the requested target track is staged from the source storage to a cache to be used as the requested target track in response to determining that the copy relationship information indicates that a source track needs to be copied to the requested target track. A determination is made of track format metadata for the requested target track, comprising the staged source track, indicating a format and layout of data in the requested target track and a track format code identifying the track format metadata. The track format code is included in a cache control block for the requested target track. | 2019-04-04 |
20190102307 | CACHE TRANSFER TIME MITIGATION - In accordance with one implementation, a method for mitigating cache transfer time entails reading data into memory from at least two consecutive elliptical data tracks in a main store region of data storage and writing the data read from the at least two consecutive elliptical data tracks to a spiral data track within a cache storage region. | 2019-04-04 |
20190102308 | METHOD AND DEVICES FOR MANAGING CACHE - Embodiments of the present disclosure relate to a method and apparatus for managing cache. The method comprises determining a cache flush time period of the cache for a lower-layer storage device associated with the cache. The method further comprises: in response to the length of the cache flush time period being longer than a threshold length of time and in response to receiving a write request, determining whether data associated with the write request has been stored in the cache. The method further comprises: in response to a miss of the data in the cache, storing the write request and the data in the cache without returning a write completion message for the write request. | 2019-04-04 |
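The deferral decision in 20190102308 reduces to a simple predicate: when the estimated flush period exceeds a threshold and the write misses the cache, the write is buffered without acknowledging completion. The threshold value and function names below are assumptions for illustration only.

```python
# Hedged sketch of the write-handling rule in 20190102308: long flush period
# plus a cache miss means "store but do not yet return a completion message".
FLUSH_THRESHOLD_S = 5.0   # illustrative threshold length of time

def handle_write(cache, key, data, flush_period_s):
    if flush_period_s > FLUSH_THRESHOLD_S and key not in cache:
        cache[key] = data          # store the write request and its data
        return "deferred"          # no write-completion message yet
    cache[key] = data
    return "completed"

cache = {}
print(handle_write(cache, "a", b"x", flush_period_s=9.0))  # deferred (miss)
print(handle_write(cache, "a", b"y", flush_period_s=9.0))  # completed (hit)
```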
20190102309 | NV CACHE - Data blocks are cached in a persistent cache ("NV cache") allocated from non-volatile RAM ("NVRAM"). The data blocks may be accessed in place in the NV cache of a "source" computing element by another "remote" computing element over a network using remote direct memory access ("RDMA"). In order for a remote computing element to access the data block in NV cache on a source computing element, the remote computing element needs the memory address of the data block within the NV cache. For this purpose, a hash table is stored and maintained in RAM on the source computing element. The hash table identifies the data blocks in the NV cache and specifies a location of the cached data block within the NV cache. | 2019-04-04 |
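The hash table described in 20190102309 is essentially a directory mapping block identifiers to offsets within the NV cache; a remote element resolves an address from it before issuing an RDMA read. The class and method names below are hypothetical.

```python
# Minimal sketch (illustrative layout) of the RAM-resident hash table in
# 20190102309: block id -> offset, resolved to an absolute NV-cache address.
class NVCacheDirectory:
    def __init__(self, base_addr):
        self.base = base_addr
        self.table = {}            # block id -> offset within the NV cache

    def install(self, block_id, offset):
        self.table[block_id] = offset

    def rdma_address(self, block_id):
        # The remote element needs this address before issuing the RDMA access.
        off = self.table.get(block_id)
        return self.base + off if off is not None else None

d = NVCacheDirectory(base_addr=0x10000)
d.install(42, 0x200)
print(hex(d.rdma_address(42)))  # 0x10200
```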
20190102310 | METHOD AND APPARATUS FOR CONTROL OF A TIERED MEMORY SYSTEM - A method and apparatus for controlling data organization in a tiered memory system, where the system comprises a lower bandwidth memory and a higher bandwidth memory. Accesses to the tiered memory system by an action of a computing device in a first time interval are monitored to determine a first measure of bandwidth utilization, from which it is determined if the action is in a high bandwidth phase for which the first measure of bandwidth utilization is greater than an upper value. It is further determined, from confidence counters, if a monitored access is consistent with respect to the first instructions or with respect to a memory address of the access. Data associated with the access is moved from the lower bandwidth memory to the higher bandwidth memory when the action is in a high bandwidth phase, the access is consistent, and bandwidth utilization of the higher bandwidth memory is below a threshold. | 2019-04-04 |
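The migration rule at the end of 20190102310 is a three-way conjunction, which can be written out directly. The threshold constants below are assumptions chosen for illustration; the patent does not specify values.

```python
# Hedged sketch of the migration decision in 20190102310: promote data to the
# higher-bandwidth tier only when all three conditions of the abstract hold.
HIGH_BW_PHASE = 0.8    # utilization above this marks a high-bandwidth phase
CONFIDENCE_MIN = 3     # confidence-counter value treated as "consistent"
FAST_TIER_CAP = 0.9    # utilization ceiling for the higher-bandwidth memory

def should_migrate(bw_utilization, confidence, fast_tier_utilization):
    return (bw_utilization > HIGH_BW_PHASE          # high bandwidth phase
            and confidence >= CONFIDENCE_MIN        # access is consistent
            and fast_tier_utilization < FAST_TIER_CAP)  # fast tier has headroom

print(should_migrate(0.85, 4, 0.5))   # True
print(should_migrate(0.85, 1, 0.5))   # False: access not yet consistent
```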
20190102311 | ACCELERATOR FABRIC - A fabric controller to provide a coherent accelerator fabric, including: a host interconnect to communicatively couple to a host device; a memory interconnect to communicatively couple to an accelerator memory; an accelerator interconnect to communicatively couple to an accelerator having a last-level cache (LLC); and an LLC controller configured to provide a bias check for memory access operations. | 2019-04-04 |
20190102312 | LAZY INCREMENT FOR HIGH FREQUENCY COUNTERS - A computing apparatus, including: a processor; a pointer to a counter memory location; and a lazy increment counter engine to: receive a stimulus to update the counter; and lazy increment the counter including issuing a weakly-ordered increment directive to the pointer. | 2019-04-04 |
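The "lazy" part of 20190102312 can be approximated at a high level: increments accumulate in a cheap, uncontended local delta and are folded into the shared counter only occasionally, standing in here for the weakly-ordered increment directive of the patent. The class below is a sketch, not the claimed hardware mechanism.

```python
# Illustrative sketch in the spirit of 20190102312: batch local increments and
# touch the contended counter memory location only once per batch.
class LazyCounter:
    FLUSH_EVERY = 8                    # batch size before updating shared state

    def __init__(self):
        self.shared = 0                # the contended counter memory location
        self.pending = 0               # local, uncontended delta

    def increment(self):
        self.pending += 1
        if self.pending >= self.FLUSH_EVERY:
            self.shared += self.pending   # one "expensive" update per batch
            self.pending = 0

    def value(self):                   # readers tolerate slight staleness
        return self.shared

c = LazyCounter()
for _ in range(20):
    c.increment()
print(c.value())  # 16 (two flushed batches; 4 increments still pending)
```

The trade-off is the same one high-frequency counters make in practice: readers may observe a slightly stale value in exchange for far fewer contended updates.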
20190102313 | TECHNIQUES TO STORE DATA FOR CRITICAL CHUNK OPERATIONS - Various embodiments are generally directed to techniques to store data for critical chunk operations, such as by utilizing a spare lane, for instance. Some embodiments are particularly directed to a memory controller that stores a portion of a critical chunk in a spare lane to enable the entire critical chunk to be stored in a half of the cache line. | 2019-04-04 |
20190102314 | TAG CACHE ADAPTIVE POWER GATING - An embodiment of a semiconductor package apparatus may include technology to determine a workload characteristic for a tag cache, and adjust a power parameter for the tag cache based on the workload characteristic. Other embodiments are disclosed and claimed. | 2019-04-04 |
20190102315 | TECHNIQUES TO PERFORM MEMORY INDIRECTION FOR MEMORY ARCHITECTURES - Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation. | 2019-04-04 |
20190102316 | MEMORY SYSTEM WITH CORE DIES STACKED IN VERTICAL DIRECTION - A memory system includes N core dies (N: an integer greater than one) stacked in a vertical direction and including N respective memory circuits having a same structure, a control circuit configured to supply N write data to the N respective memory circuits, an address generating circuit configured to generate a single common address as write addresses at which the N write data are to be stored in the N respective memory circuits, and an address conversion circuit configured to convert the single common address to generate N addresses which are different for the N respective memory circuits and to supply the N addresses as write addresses to the N respective memory circuits. | 2019-04-04 |
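The address conversion in 20190102316 takes one common write address and produces N distinct per-die addresses. The patent does not fix a particular conversion function; XOR-ing in the die index, as below, is purely an assumed example of such a mapping.

```python
# Hedged sketch of the address conversion circuit in 20190102316: a single
# common address becomes N different write addresses, one per core die.
def convert(common_addr, n_dies):
    # Assumption: per-die address = common address XOR die index.
    return [common_addr ^ die for die in range(n_dies)]

addrs = convert(0x40, 4)
print(addrs)  # [64, 65, 66, 67] -- four distinct addresses from one input
```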
20190102317 | TECHNOLOGIES FOR FLEXIBLE VIRTUAL FUNCTION QUEUE ASSIGNMENT - Technologies for I/O device virtualization include a computing device with an I/O device that includes a physical function, multiple virtual functions, and multiple assignable resources, such as I/O queues. The physical function assigns an assignable resource to a virtual function. The computing device configures a page table mapping from a virtual function memory page located in a configuration space of the virtual function to a physical function memory page located in a configuration space of the physical function. The virtual function memory page includes a control register for the assignable resource, and the physical function memory page includes another control register for the assignable resource. A value may be written to the control register in the virtual function memory page. A processor of the computing device translates the virtual function memory page to the physical function memory page using the page mapping. Other embodiments are described and claimed. | 2019-04-04 |
20190102318 | Cache Memory That Supports Tagless Addressing - The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead. | 2019-04-04 |
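The two-path lookup in 20190102318 is concrete enough to sketch: one translation yields both a physical address (used with tag checks in the upper levels) and a cache address (used to directly index the tagless cache on a miss). The dictionaries below stand in for the TLB and cache arrays; all names are illustrative.

```python
# Illustrative sketch of tagless addressing per 20190102318: tag-checked
# physically addressed levels first, then a direct index with no tag compare.
def lookup(va, translation, tagged_cache, tagless_cache):
    pa, cache_addr = translation[va]       # one translation, two addresses
    if pa in tagged_cache:                 # tag check against the physical address
        return tagged_cache[pa]
    return tagless_cache[cache_addr]       # direct index: no tag-checking step

translation = {0x1000: (0x8000, 3)}
tagged_cache = {}                          # physically addressed levels (miss here)
tagless_cache = {3: b"line-data"}          # directly indexed, tagless cache
print(lookup(0x1000, translation, tagged_cache, tagless_cache))  # b'line-data'
```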
20190102319 | MEMORY CONTROLLER, MEMORY SYSTEM, INFORMATION PROCESSING SYSTEM, MEMORY CONTROL METHOD, AND PROGRAM - The aim is to reduce the capacity of a buffer included in a memory controller for managing a replacement area of a memory. Replacement management information for managing a relationship between a predetermined data area of a memory and a replacement area corresponding to the data area is stored in the memory. A memory controller includes a replacement management information buffer configured to hold part of the replacement management information. A replacement processing unit, in a case in which replacement has occurred in the memory for data related to an access command from a host computer to the memory, causes the replacement management information buffer to hold the replacement management information of a portion of the data for which the replacement has occurred. | 2019-04-04 |
20190102320 | TIME TRACKING WITH PATROL SCRUB - One embodiment provides a memory controller. The memory controller includes a memory controller memory; a timestamp circuitry and a demarcation voltage (VDM) selection circuitry. The timestamp circuitry is to capture a current timer index from a timer circuitry in response to an initiation of a periodic patrol scrub and to compare the current timer index to a stored timestamp. The VDM selection circuitry is to update a state of a sub-block of a memory array, if the state is less than a threshold and a difference between the current timer index and the stored timestamp is nonzero. The timestamp circuitry is further to store the current timer index as a new timestamp. | 2019-04-04 |
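The patrol-scrub bookkeeping in 20190102320 follows a small, fixed sequence: capture the timer index, compare it to the stored timestamp, bump the sub-block state if time has elapsed and the state is below its ceiling, then store the new timestamp. The state ceiling and data layout below are assumptions for illustration.

```python
# Hedged sketch of the per-scrub update in 20190102320: timestamp compare,
# conditional VDM-state advance, then timestamp refresh.
STATE_MAX = 4   # illustrative threshold for the per-sub-block state

def patrol_scrub(sub_block, timer_index):
    if (sub_block["state"] < STATE_MAX
            and timer_index - sub_block["timestamp"] != 0):
        sub_block["state"] += 1           # advance the demarcation-voltage state
    sub_block["timestamp"] = timer_index  # store current index as the new stamp
    return sub_block

sb = {"state": 0, "timestamp": 10}
patrol_scrub(sb, 17)
print(sb)  # {'state': 1, 'timestamp': 17}
```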
20190102321 | TECHNIQUES TO PROVIDE ACCESS PROTECTION TO SHARED VIRTUAL MEMORY - Various embodiments are generally directed to techniques for shared virtual memory (SVM) access protection, such as by performing a security check whenever a write request arrives from an SVM device, for instance. Some embodiments are particularly directed to an input/output memory management unit (IOMMU) that prevents an SVM device from modifying a code page with a memory transaction request by generating an access request fault and/or a translation completion with read-only access in response to the memory transaction request. | 2019-04-04 |
20190102322 | CROSS-DOMAIN SECURITY IN CRYPTOGRAPHICALLY PARTITIONED CLOUD - Solutions for secure memory access in a computing platform, include a multi-key encryption (MKE) engine as part of the memory interface between processor core(s) and memory of a computing platform. The processor core(s) perform workloads, each utilizing allocated portions of memory. The MKE engine performs key-based cryptography operations on data to isolate portions of the memory from workloads to which those portions of the memory are not allocated. A key-mapping data store is accessible to the MKE engine and contains associations between identifiers of portions of the memory, and corresponding key identification data from which cryptographic keys are obtained. A key tracking log is maintained by the MKE engine, and the MKE engine temporarily stores entries in the key tracking log containing the identifiers of the portions of the memory and key identification data for those portions of memory during memory-access operations of those portions of memory. | 2019-04-04 |
20190102323 | VERIFICATION BIT FOR ONE-WAY ENCRYPTED MEMORY - An embodiment of a semiconductor package apparatus may include technology to identify a first encrypted memory alias corresponding to a first portion of memory based on a verification indicator, where the first portion is decryptable and readable by both a privileged component and an unprivileged component, and identify a second encrypted memory alias corresponding to a second portion of memory based on the verification indicator, where the second portion is accessible by only the unprivileged component. Other embodiments are disclosed and claimed. | 2019-04-04 |
20190102324 | CACHE BEHAVIOR FOR SECURE MEMORY REPARTITIONING SYSTEMS - Cache behavior for secure memory repartitioning systems is described. Implementations may include a processor core and a memory controller coupled between the processor core and a memory device. The processor core is to receive a memory access request to a page in the memory device, the memory access request comprising a first guarded attribute (GA) indicator indicating whether the page is a secure page belonging to an enclave, determine whether the first GA indicator matches a second GA indicator in a cache line entry corresponding to the page, the cache line entry comprised in a cache, and responsive to a determination that the first GA indicator does not match the second GA indicator, apply an eviction policy to the cache line entry based on whether the cache line is indicated as a dirty cache line, and access second data in the memory device for the page. | 2019-04-04 |
20190102325 | MEMORY CONTROL MANAGEMENT OF A PROCESSOR - Systems, apparatuses and methods may provide for technology that conducts a comparison between an identified capability of a memory device and memory usage rules associated with a processor. The memory usage rules are to identify allowed memory accesses by the processor. The technology further limits access by the processor to the memory device based upon the comparison. | 2019-04-04 |
20190102326 | METHOD, APPARATUS, SYSTEM FOR EARLY PAGE GRANULAR HINTS FROM A PCIE DEVICE - Aspects of the embodiments are directed to systems and methods for providing and using hints in data packets to perform memory transaction optimization processes prior to receiving one or more data packets that rely on memory transactions. The systems and methods can include receiving, from a device connected to the root complex across a PCIe-compliant link, a data packet; identifying from the received data packet a memory transaction hint bit; determining a memory transaction from the memory transaction hint bit; and performing an optimization process based, at least in part, on the determined memory transaction. | 2019-04-04 |
20190102327 | CALIBRATION PROTOCOL FOR COMMAND AND ADDRESS BUS VOLTAGE REFERENCE IN LOW-SWING SINGLE-ENDED SIGNALING - A single-ended receiver is coupled to an input-output (I/O) pin of a command and address (CA) bus. The receiver is configurable with dual-mode I/O support to operate the CA bus in a low-swing mode and a high-swing mode. The receiver is configurable to receive a first command on the I/O pin while in the high-swing mode, initiate calibration of the slave device to operate in the low-swing mode in response to the first command, switch the slave device to operate in the low-swing mode while the CA bus remains active, and to receive a second command on the I/O pin while in the low-swing mode. | 2019-04-04 |
20190102328 | DETECTION OF A TIME CONDITION RELATIVE TO A TWO-WIRE BUS - A value representative of a duration of the low state of a synchronization signal on a bus is measured and then compared with a threshold value. The threshold value is stored in a memory and the measured value represents, in a first comparison, a longest duration of the low states of the synchronization signal. | 2019-04-04 |
20190102329 | ADAPTIVE BUFFERING OF DATA RECEIVED FROM A SENSOR - In a method of adaptive buffering in a mobile device having a host processor and a sensor processor coupled with the host processor, the sensor processor is used to buffer data received from a sensor that is operated by the sensor processor. The data is buffered by the sensor processor into a circular data buffer. Responsive to the sensor processor detecting triggering data within the received data: a first adaptive data buffering action is initiated with respect to the data received from the sensor operated by the sensor processor; a second adaptive data buffering action is initiated with respect to second data received from a second sensor of the mobile device; and a command is sent from the sensor processor to a second processor. | 2019-04-04 |
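The sensor-processor loop in 20190102329 maps onto a fixed-size circular buffer plus a trigger check: each sample lands in the ring, and a triggering sample kicks off the three actions the abstract names (adapt primary buffering, adapt a second sensor's buffering, notify the host processor). The class, trigger rule, and action labels below are illustrative assumptions.

```python
# Minimal sketch (hypothetical names) of adaptive sensor buffering per
# 20190102329: a circular buffer with trigger detection on incoming samples.
from collections import deque

class SensorBuffer:
    def __init__(self, capacity, trigger):
        self.ring = deque(maxlen=capacity)   # circular data buffer
        self.trigger = trigger
        self.actions = []                    # record of adaptive actions taken

    def push(self, sample):
        self.ring.append(sample)             # oldest sample drops when full
        if sample >= self.trigger:
            # Triggering data detected within the received data.
            self.actions.append(("adapt_primary", sample))    # first action
            self.actions.append(("adapt_secondary", sample))  # second sensor
            self.actions.append(("notify_host", sample))      # command to host

buf = SensorBuffer(capacity=4, trigger=100)
for s in [1, 2, 150, 3, 4, 5]:
    buf.push(s)
print(list(buf.ring))     # [150, 3, 4, 5]
print(len(buf.actions))   # 3
```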
20190102330 | COMMUNICATING DATA WITH STACKED MEMORY DIES - Methods, systems, and devices for communicating data with stacked memory dies are described. A first semiconductor die may communicate with an external computing device using a binary-symbol signal including two signal levels representing one bit of data. Semiconductor dies may be stacked on one another and include internal interconnects (e.g., through-silicon vias) to relay an internal signal generated based on the binary-symbol signal. The internal signal may be a multi-symbol signal modulated using a modulation scheme that includes three or more levels to represent more than one bit of data. The multi-level symbol signal may simplify the internal interconnects. A second semiconductor die may be configured to receive and re-transmit the multi-level symbol signal to semiconductor dies positioned above the second semiconductor die. | 2019-04-04 |
20190102331 | MEMORY CHANNEL HAVING MORE THAN ONE DIMM PER MOTHERBOARD DIMM CONNECTOR - A method is described. The method includes receiving DDR memory channel signals from a motherboard through a larger DIMM motherboard connector. The method includes routing the signals to one of first and second smaller form factor connectors. The method includes sending the DDR memory channel signals to a DIMM that is connected to the one of the first and second smaller form factor connectors. | 2019-04-04 |