Patent application number | Description | Published |
--- | --- | --- |
20150096038 | COLLISION AVOIDANCE IN A DISTRIBUTED TOKENIZATION ENVIRONMENT - A client receives sensitive data to be tokenized. The client queries a token table with a portion of the sensitive data to determine if the token table includes a token mapped to the value of the portion of the sensitive data. If the token table does not include a token mapped to the value of the portion of the sensitive data, a candidate token is generated. The client queries a central token management system to determine if the candidate token collides with a token generated by or stored at another client. In some embodiments, the candidate token includes a value from a unique set of values assigned by the central token management system to the client, guaranteeing that the candidate token does not cause a collision. The client then tokenizes the sensitive data with the candidate token and stores the candidate token in the token table. | 04-02-2015 |
20150096039 | DYNAMIC TOKENIZATION WITH MULTIPLE TOKEN TABLES - Sensitive data is accessed by a tokenization system. The sensitive data includes a first portion and a second portion. A token table is selected from a plurality of dynamic token tables based on the second portion of the received data. The selected token table is queried with the first portion of the sensitive data. If the selected token table includes a token mapped to the value of the first portion of the sensitive data, the first portion of the sensitive data is replaced with the token to form tokenized data. If the selected token table does not include a token mapped to the value of the first portion of the sensitive data, a token is generated, the sensitive data is tokenized with the generated token, and the generated token and its association with the value of the first portion of the sensitive data are stored in the selected token table. | 04-02-2015 |
20150096056 | COLLISION AVOIDANCE IN A DISTRIBUTED TOKENIZATION ENVIRONMENT - A client receives sensitive data to be tokenized. The client queries a token table with a portion of the sensitive data to determine if the token table includes a token mapped to the value of the portion of the sensitive data. If the token table does not include a token mapped to the value of the portion of the sensitive data, a candidate token is generated. The client queries a central token management system to determine if the candidate token collides with a token generated by or stored at another client. In some embodiments, the candidate token includes a value from a unique set of values assigned by the central token management system to the client, guaranteeing that the candidate token does not cause a collision. The client then tokenizes the sensitive data with the candidate token and stores the candidate token in the token table. | 04-02-2015 |
20150317492 | Collision Avoidance in a Distributed Tokenization Environment - A client receives sensitive data to be tokenized. The client queries a token table with a portion of the sensitive data to determine if the token table includes a token mapped to the value of the portion of the sensitive data. If the token table does not include a token mapped to the value of the portion of the sensitive data, a candidate token is generated. The client queries a central token management system to determine if the candidate token collides with a token generated by or stored at another client. In some embodiments, the candidate token includes a value from a unique set of values assigned by the central token management system to the client, guaranteeing that the candidate token does not cause a collision. The client then tokenizes the sensitive data with the candidate token and stores the candidate token in the token table. | 11-05-2015 |
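The tokenization flow the abstracts above describe (look up a portion of the sensitive data in a token table, and on a miss generate a candidate token from a unique value range assigned to the client by a central token management system, so candidates cannot collide across clients) can be sketched roughly as follows. All names, the 6-character portion length, and the numeric-range scheme are illustrative assumptions, not the patented implementation.

```python
import itertools

class TokenizingClient:
    """Minimal sketch of a distributed tokenization client.

    Each client draws candidate tokens from a unique, non-overlapping
    value range assigned by a central token management system, so a
    candidate token cannot collide with one generated at another client.
    """

    def __init__(self, assigned_range):
        # Unique value range assigned to this client by the central system.
        self._candidates = iter(assigned_range)
        self._token_table = {}  # value of sensitive-data portion -> token

    def tokenize(self, sensitive_data, portion_len=6):
        portion = sensitive_data[:portion_len]
        token = self._token_table.get(portion)
        if token is None:
            # No existing mapping: generate a candidate token from the
            # client's unique range (guaranteed collision-free) and store it.
            token = f"{next(self._candidates):0{portion_len}d}"
            self._token_table[portion] = token
        # Replace the portion with the token to form the tokenized data.
        return token + sensitive_data[portion_len:]

client = TokenizingClient(assigned_range=range(100000, 200000))
t1 = client.tokenize("4111111111111111")
t2 = client.tokenize("4111111111111111")
```

Because the second call hits the stored mapping, `t1` and `t2` are identical, which is the deterministic-lookup property the abstracts rely on.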
Patent application number | Description | Published |
--- | --- | --- |
20120080432 | CONTAINER ASSEMBLY - A container system having a base unit and a removable container unit. The base unit includes a base container portion and a base handle having a first end portion attached to at least one of the opposing side walls of the base container portion, a second end portion attached to at least another of the opposing side walls of the base container portion, and an extending portion extending between the first end portion and the second end portion. The base unit also includes a first latch region. The removable container unit includes a second latch region constructed and arranged to latch with the first latch region. The removable container unit prevents access to the opening of the base unit when the removable container unit is attached to the base unit and permits access to the opening of the base unit when the removable container unit is removed from the base unit. | 04-05-2012 |
20120152944 | SEALABLE STORAGE CONTAINER - A storage container has a container body with an upwardly facing opening and a lid connected to the container body that is pivotally movable between open and closed positions. At least one latch is provided at each of the front side, left side, and right side of the container body to secure the lid to the container body. A seal structure is disposed at an interface between the lid and an upper portion of the container. The seal structure generally surrounds the upwardly facing opening and is compressed when the lid is latched in the closed position by the latches. The container body may also include a plurality of compartments each with upper edges defining an upwardly facing opening. The upper edges can be positioned to engage an inner surface of the lid when the lid is in the closed position so that the lid closes off the compartments. | 06-21-2012 |
20130048588 | SHELVING SYSTEM - A shelving system having tubular frame members and plastic shelves, the shelves having frame receiving regions for receiving ends of the frame members. The shelves and frame members are connectable by insertion of the frame members into the frame receiving regions to form an openly configured, assembled shelving unit in which the shelves are connected to one another in vertically spaced relationship by the frame members. The shelving system has at least one closure member. The assembled shelving system and the at least one closure member both have an integrally molded connector structure. The integrally molded connector structures enable the at least one molded plastic closure member to be connected to the assembled shelving unit after the shelving unit has been assembled. | 02-28-2013 |
Patent application number | Description | Published |
--- | --- | --- |
20110161752 | ROBUST MEMORY LINK TESTING USING MEMORY CONTROLLER - REUT (Robust Electrical Unified Testing) for memory links is introduced, which speeds testing, tool development, and debugging. In addition, it provides training hooks that have enough performance to be used by BIOS to train parameters and conditions that have not been possible with past implementations. Address pattern generation circuitry is also disclosed. | 06-30-2011 |
20140059287 | ROW HAMMER REFRESH COMMAND - A memory controller issues a targeted refresh command. A specific row of a memory device can be the target of repeated accesses. When the row is accessed repeatedly within a time threshold (also referred to as “hammered” or a “row hammer event”), a physically adjacent row (a “victim” row) may experience data corruption. The memory controller receives an indication of a row hammer event, identifies the row associated with the row hammer event, and sends one or more commands to the memory device to cause the memory device to perform a targeted refresh that will refresh the victim row. | 02-27-2014 |
20140085995 | METHOD, APPARATUS AND SYSTEM FOR DETERMINING A COUNT OF ACCESSES TO A ROW OF MEMORY - Techniques and mechanisms for determining a count of accesses to a row of a memory device. In an embodiment, the memory device includes a counter comprising circuitry to increment a value of the count in response to detecting a command to activate the row. Circuitry of the counter may further set a value of the count to a baseline value in response to detecting a command to refresh the row. In another embodiment, the memory device includes evaluation logic to compare a value of the count to a threshold value. A signal is generated based on the comparison to indicate whether a row hammer event for the row is indicated. | 03-27-2014 |
20140089576 | METHOD, APPARATUS AND SYSTEM FOR PROVIDING A MEMORY REFRESH - A memory controller to implement targeted refreshes of potential victim rows of a row hammer event. In an embodiment, the memory controller receives an indication that a specific row of a memory device is experiencing repeated accesses which threaten the integrity of data in one or more victim rows physically adjacent to the specific row. The memory controller accesses default offset information in the absence of address map information which specifies an offset between physically adjacent rows of the memory device. In another embodiment, the memory controller determines addresses for potential victim rows based on the default offset information. In response to the received indication of the row hammer event, the memory controller sends, for each of the determined addresses, a respective command to the memory device, where the commands are for the memory device to perform targeted refreshes of potential victim rows. | 03-27-2014 |
20140156935 | Unified Exclusive Memory - In one embodiment, a processor includes at least one execution unit, a near memory, and memory management logic. The memory management logic may be to manage the near memory and a far memory as a unified exclusive memory, where the far memory is external to the processor. Other embodiments are described and claimed. | 06-05-2014 |
20140157055 | MEMORY SUBSYSTEM COMMAND BUS STRESS TESTING - A memory subsystem includes a logic buffer coupled to a command bus between a memory controller and a memory device. The logic buffer detects that the memory controller places the command bus in a state where the memory controller does not drive the command bus with a valid executable memory device command. In response to detecting the state of the command bus, the logic buffer generates a signal pattern and injects the signal pattern on the command bus after a scheduler of the memory controller to drive the command bus with the signal pattern. | 06-05-2014 |
20140189224 | TRAINING FOR MAPPING SWIZZLED DATA TO COMMAND/ADDRESS SIGNALS - Data pin mapping and delay training techniques. Valid values are detected on a command/address (CA) bus at a memory device. A first part of the pattern (high phase) is transmitted via a first subset of data pins on the memory device in response to detecting values on the CA bus; a second part of the pattern (low phase) is transmitted via a second subset of data pins on the memory device in response to detecting values on the CA bus. Signals are sampled at the memory controller from the data pins while the CA pattern is being transmitted to obtain a first sample (high phase) and a second sample (low phase) by analyzing the first and the second subset of sampled data pins. The analysis, combined with knowledge of the transmitted pattern on the CA bus, leads to finding the unknown data pin mapping. Varying the transmitted CA patterns and the resulting feedback sampled on memory controller data signals allows CA/CTRL/CLK signal delay training with and without prior knowledge of the data pin mapping. | 07-03-2014 |
20140223197 | METHOD AND APPARATUS FOR MEMORY ENCRYPTION WITH INTEGRITY CHECK AND PROTECTION AGAINST REPLAY ATTACKS - A method and apparatus to provide cryptographic integrity checks and replay protection to protect against hardware attacks on system memory is provided. A mode of operation for block ciphers enhances the standard XTS-AES mode of operation to perform memory encryption by extending a tweak to include a “time stamp” indicator. A tree-based replay protection scheme uses standard XTS-AES to encrypt contents of a cache line in the system memory. A Message-Authentication Code (MAC) for the cache line is encrypted using enhanced XTS-AES and a “time stamp” indicator associated with the cache line. The “time stamp” indicator is stored in a processor. | 08-07-2014 |
20150039790 | DYNAMIC PRIORITY CONTROL BASED ON LATENCY TOLERANCE - A dynamic priority controller monitors a level of data in a display engine buffer and compares the level of data in the display engine buffer to a plurality of thresholds including a first threshold and a second threshold. When the level of data in the display engine buffer is less than or equal to the first threshold, the dynamic priority controller increases a priority for processing display engine data in a communication channel. When the level of data in the display engine buffer is greater than or equal to the second threshold, the dynamic priority controller decreases the priority for processing the display engine data in the communication channel. | 02-05-2015 |
20150378919 | SELECTIVE PREFETCHING FOR A SECTORED CACHE - A memory subsystem includes a memory hierarchy that performs selective prefetching based on prefetch hints. A lower level memory detects a cache miss for a requested cache line that is part of a superline. The lower level memory generates a request vector for the cache line that triggered the cache miss, including a field for each cache line of the superline. The request vector includes a demand request for the cache line that caused the cache miss, and the lower level memory modifies the request vector with prefetch hint information. The prefetch hint information can indicate a prefetch request for one or more other cache lines in the superline. The lower level memory sends the request vector to the higher level memory with the prefetch hint information, and the higher level memory services the demand request and selectively either services a prefetch hint or drops the prefetch hint. | 12-31-2015 |
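The row hammer handling described in applications 20140085995 and 20140089576 above (a per-row counter incremented on activate, reset to baseline on refresh, compared to a threshold, and victim addresses derived from a default offset when no address map is available) can be sketched behaviorally as follows. The class, method names, tiny threshold, and ±1 default offset are illustrative assumptions, not the patented circuits.

```python
from collections import defaultdict

class RowHammerMonitor:
    """Behavioral sketch of row-hammer detection and targeted refresh.

    Counts ACTIVATE commands per row, resets the count when a row is
    refreshed, and on crossing the threshold issues targeted refreshes
    to the potential victim rows at a default physical-adjacency offset.
    """

    def __init__(self, threshold=3, default_offset=1):
        # Real DRAM hammer thresholds are in the tens of thousands;
        # 3 keeps the sketch easy to exercise.
        self.threshold = threshold
        self.default_offset = default_offset
        self.counts = defaultdict(int)
        self.refreshed = []  # log of targeted-refresh commands sent

    def on_activate(self, row):
        self.counts[row] += 1
        if self.counts[row] >= self.threshold:
            # Row hammer event: refresh potential victims on either
            # side, using the default offset in the absence of an
            # address map.
            for victim in (row - self.default_offset,
                           row + self.default_offset):
                if victim >= 0:
                    self.on_refresh(victim)
                    self.refreshed.append(victim)
            self.counts[row] = 0

    def on_refresh(self, row):
        # A refresh returns the row's activation count to baseline.
        self.counts[row] = 0

mon = RowHammerMonitor()
for _ in range(3):
    mon.on_activate(7)  # "hammer" row 7 until the threshold trips
```

After the loop, rows 6 and 8 (the adjacent potential victims) have received targeted refreshes and row 7's count is back at baseline.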
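The request-vector idea in the selective-prefetching abstract (20150378919) can also be sketched: a superline groups several cache lines, a miss produces a vector with one field per line carrying the demand request plus prefetch hints, and the higher-level memory always services the demand but may drop hints. The field encodings, superline size, and the "drop hints when busy" policy are assumptions made for illustration.

```python
LINES_PER_SUPERLINE = 4
NONE, HINT, DEMAND = 0, 1, 2  # illustrative per-field encodings

def build_request_vector(miss_line, hint_lines):
    """Lower-level memory: one field per cache line in the superline."""
    vec = [NONE] * LINES_PER_SUPERLINE
    vec[miss_line] = DEMAND  # the line that triggered the cache miss
    for line in hint_lines:
        if vec[line] == NONE:
            vec[line] = HINT  # prefetch hint; may be dropped upstream
    return vec

def service(vector, busy=False):
    """Higher-level memory: always service the demand request, and
    selectively either service each prefetch hint or drop it (here,
    hints are dropped whenever the memory is busy)."""
    serviced = [i for i, f in enumerate(vector) if f == DEMAND]
    if not busy:
        serviced += [i for i, f in enumerate(vector) if f == HINT]
    return sorted(serviced)

vec = build_request_vector(miss_line=1, hint_lines=[2, 3])
```

With this vector, `service(vec, busy=False)` returns the demand line plus both hinted lines, while `service(vec, busy=True)` returns only the demand line, mirroring the "service or drop" choice in the abstract.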