Patent application number | Description | Published |
20130063968 | PLANAR FRONT ILLUMINATION SYSTEM HAVING A LIGHT GUIDE WITH MICRO SCATTERING FEATURES FORMED THEREON AND METHOD OF MANUFACTURING THE SAME - A system for illuminating a reflective display or other material from a planar front device, and a method of manufacture thereof. The system includes a light guide plate that conducts light from an edge light source across the face of a reflective display. Micro scattering features are formed on an outer surface of the light guide, farthest from the reflective display or material. A stepped index layer is formed on the surface of the light guide plate containing the micro scattering features. The stepped index layer has an index of refraction lower than that of the light guide plate to assist in the total internal reflection of light injected into the light guide plate. The micro scattering features act as light-reflecting areas that redirect luminous flux toward the display. In one embodiment, the micro scattering features are formed as white dots on the light guide plate. A black absorbing layer can be added to each white scattering dot to improve the apparent contrast when the front light is deactivated. | 03-14-2013 |
20130063969 | PLANAR FRONT ILLUMINATION SYSTEM HAVING A LIGHT GUIDE WITH MICRO LENSES FORMED THEREON AND METHOD OF MANUFACTURING THE SAME - A system for illuminating a reflective display or other material from a planar front device, and a method of manufacture thereof. The system includes a light guide plate that conducts light from an edge light source across the face of a reflective display. Micro lenses are formed on the inner or outer surface of the light guide and direct the light conducted in the light guide toward the display. A stepped index layer is formed on the surface of the light guide plate containing the micro lenses. The stepped index layer has an index of refraction lower than that of the light guide plate to assist in the total internal reflection of light injected into the light guide plate. A protective top coat or a touch screen can be laminated to the outside of the light guide plate. | 03-14-2013 |
20130100074 | PEN INTERFACE FOR A TOUCH SCREEN DEVICE - A system and method that allows pen input and touch input to better co-exist during writing on a touch screen device, such as a tablet device. If the pen/stylus is detected as present and is pointed at the surface, inputs initiated by a user's finger (or other object) are rejected and pen inputs are allowed. If the pen is detected as present, but is pointed away from the writing surface of the touch screen, stylus/pen inputs are rejected and touch inputs are allowed. If the pen is not detected as present, the system ignores all pen inputs. Erasing functions are also provided. The size of the eraser can be made proportional to the pressure level, contact size or signal level of the pen or the user's finger performing the erasing. | 04-25-2013 |
20150100874 | UI TECHNIQUES FOR REVEALING EXTRA MARGIN AREA FOR PAGINATED DIGITAL CONTENT - Techniques are disclosed for revealing extra margin area for paginated digital content, referred to herein as an extra margins mode. For example, the extra margins mode may be used to reveal/expose extra margin area (galley area) at the perimeter of one or more pages of an eBook or a photo of a photo album. Once galley area is exposed, a user can add content to the galley area, such as annotations using a stylus. In some cases, the extra margins mode may be configured to expose galley area for one or more pages in response to a reveal command input, such as a pinch gesture, a drag gesture, or an inward flick gesture from near the edge of a page using a stylus. The extra margins mode may also be configured to hide exposed galley areas in response to a hide command input, such as a spread gesture. | 04-09-2015 |
20150100876 | ANNOTATION OF DIGITAL CONTENT VIA SELECTIVE FIXED FORMATTING - Techniques are disclosed for providing a fixed format viewing mode in electronic computing devices. The fixed format viewing mode may be engaged upon receiving virtual ink annotations at the touch screen of the electronic device. The annotations may be input using an active stylus. Upon receiving virtual ink annotations, the current page of digital content may be converted into a fixed format page wherein the formatting characteristics are held constant and the annotations remain in the same location with respect to the underlying digital content. Formatting characteristics for other pages of the digital content may be altered; however, the fixed format page maintains the same format as when the annotations were added. The user may hide and/or edit virtual ink annotations, and when the annotations are hidden the content of the fixed format page may flow normally and match the formatting characteristics of the rest of the digital content. | 04-09-2015 |
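The input-arbitration policy described in 20130100074 reduces to a small decision table over pen presence and pen orientation. A minimal sketch in Python, with hypothetical function and parameter names (the application describes the policy, not an API):

```python
def arbitrate_input(pen_present: bool, pen_toward_surface: bool,
                    event_source: str) -> bool:
    """Return True if an input event should be accepted.

    event_source is "pen" or "touch". Illustrative only; the actual
    detection of pen presence/orientation is left to the hardware.
    """
    if pen_present and pen_toward_surface:
        return event_source == "pen"    # writing: reject palm/finger touches
    if pen_present:
        return event_source == "touch"  # pen pointed away: touch input only
    return event_source == "touch"      # no pen present: ignore pen events
```

For example, a palm resting on the screen while the stylus hovers over it (`arbitrate_input(True, True, "touch")`) is rejected, while the same touch with the pen flipped away is accepted.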
20090290660 | Pseudo Noise Coded Communication Systems - Systems, apparatus and methods for acquiring code phase and multipath channel models in a communication device. A fast Walsh transform engine is used to acquire the pseudo noise code phase and the pseudo noise code bit rate of a broadcast radiofrequency signal. Multipath filter coefficients are recovered from the pseudo noise code phase and the pseudo noise code bit rate. A pseudo noise generator is initialized with the pseudo noise code phase acquired during the fast Walsh transform step. The pseudo noise code phase and pseudo noise code bit rate are tracked by a phase locked loop so that communication with the radiofrequency signal is maintained. The received signal is then despread using the pseudo noise code phase and pseudo noise code bit rate so that any data in the radiofrequency signal is recovered. | 11-26-2009 |
20100265168 | LOW POWER ACTIVE MATRIX DISPLAY - Described herein are systems and methods for the reduction of power consumption and mitigation of device stress accumulation in low frequency refreshed Liquid Crystal Displays (LCDs). In an exemplary embodiment, two or more transistors in series are used to hold charge on an LCD pixel. To prevent negative stress on the transistors, the transistors are alternately driven to an “on” state so that no one transistor sees a long “off” time. In another embodiment, circuits and signaling waveforms for performing frame writing and stress mitigation are provided that minimize dynamic power consumption and static power consumption in peripheral ESD circuits. | 10-21-2010 |
20160035301 | ACTIVE MATRIX DISPLAY WITH ADAPTIVE CHARGE SHARING - Techniques are disclosed for controlling an active matrix display using adaptive charge sharing. Separate voltages applied to each column control line correspond to the image data for setting the grayscale level of the respective pixels. Image data for one row of pixels are compared to the image data for another row of pixels on a column-by-column basis. If the image data for one or more columns changes from one row to the next by more than a threshold amount, charge sharing is activated between updating different rows for controlling the corresponding column control lines of the display. Charge sharing may be implemented by adding one or more capacitors that are switched with a block of column control lines, and a processor configured to compare the image data for successive rows, determine whether the criteria for charge sharing are met, and control the switches based on the determination. | 02-04-2016 |
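The charge-sharing criterion in 20160035301 is a column-by-column comparison of successive rows of image data against a threshold. A minimal software sketch, assuming rows arrive as per-column grayscale code values (the application leaves the exact criteria and circuit implementation open):

```python
def charge_sharing_needed(prev_row, next_row, threshold):
    """Decide whether to activate charge sharing before driving next_row.

    prev_row and next_row are sequences of per-column grayscale codes;
    threshold is the change (in codes) above which sharing is activated.
    Illustrative stand-in for the comparator logic in the application.
    """
    return any(abs(a - b) > threshold for a, b in zip(prev_row, next_row))
```

When this returns True, the controller would briefly connect the block of column lines to the shared capacitor(s) before writing the new row; when False, the lines are driven directly and the sharing switches stay open.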
20140306477 | STORAGE BOX FOR A MOTOR VEHICLE - A center console for a motor vehicle is disclosed in which the center console includes a front portion and a rear storage box portion defined by a pair of side walls, a rear wall, a floor, an armrest and a front wall. The front wall of the storage box is moveable between substantially upright and substantially horizontal positions so as to define two distinct storage configurations. When the front wall is in the upright position a single large closed storage box is defined by the pair of side walls, the rear wall, the floor, the armrest and the front wall. When the front wall is in the horizontal position a first large open storage area is defined by the pair of side walls, the rear wall, the armrest and the front wall. A reconfigurable storage facility is thereby provided in a cost effective and simple manner. | 10-16-2014 |
20150203044 | ROLL COVER ASSEMBLY - A roll cover assembly for covering a cargo space of a vehicle includes a cassette housing an extendable/retractable roll cover, a first receptacle adjacent a first side wall of the cargo space and having a channel configured to slidably guide a first end of the cassette during insertion or removal, a second receptacle adjacent a second side wall of the cargo space and having an opening configured to receive a second end of the cassette and permit pivoting of the cassette about the second end during removal of the first end from the first receptacle, a locking mechanism releasably locking the first end of the cassette into the first receptacle, and a release device actuatable to release the locking mechanism by pulling along a line-of-action parallel with a direction in which the first end of the cassette moves during removal from the first receptacle. | 07-23-2015 |
20150251606 | STORAGE UNIT FOR A MOTOR VEHICLE - A storage assembly for use in a motor vehicle for transporting objects is disclosed. The storage assembly includes upper and lower cover parts that are hingedly connected together and also hingedly connected, via the lower cover part, to an interior panel of the motor vehicle, such as a sidewall of a center console. When the storage assembly is in a stowed configuration, the two upper and lower cover parts are aligned with one another so as to lie substantially flush against the sidewall of the center console. When the storage assembly is in a fully deployed configuration, the upper and lower cover parts are arranged to form a substantially L-shaped support for transporting one or more objects. | 09-10-2015 |
20160023586 | INTERNAL VEHICLE DOCKING ARM AND STORAGE - A lift assembly for a vehicle includes a bracket configured to be fixed to the vehicle. A lift arm is pivotally coupled to the bracket. A mating structure is supported by the lift arm. The mating structure is configured to lift a personal mobility device and includes an electrical connector for electrically mating to an electrical connector of the personal mobility device. | 01-28-2016 |
20160031513 | ELECTRIC BICYCLE - A bicycle includes a handlebar and a vibration generator supported on the handlebar. A sensor is configured to detect an overtaking vehicle. A controller is configured to activate the vibration generator when the sensor detects an overtaking vehicle. | 02-04-2016 |
20160031506 | ELECTRIC BICYCLE - An apparatus includes a bicycle frame defining a bore, and a bicycle seat assembly including a post received in the bore. An indicating unit is configured to indicate a first position of the post along the bore for a first user and a second position of the post along the bore for a second user. An identification unit is configured to identify the first user and the second user. A computing device includes a processor and a memory, the memory storing instructions such that the processor is programmed to instruct the indicating unit to indicate the first position when the identification unit identifies the first user and to indicate the second position when the identification unit identifies the second user. | 02-04-2016 |
20160031507 | ELECTRIC BICYCLE - A bicycle includes a frame and a seat post removably engaged with the frame. A light source is supported by the seat post. A power source is supported by the seat post. A switch is disposed between the light source and the power source. | 02-04-2016 |
20160031514 | ELECTRIC BICYCLE - A bicycle includes a computing device including a processor and a memory. The processor is programmed to communicate with a user input device of a vehicle when the bicycle is docked to the vehicle. The processor is programmed to communicate with a mobile device when the bicycle is undocked from the vehicle. | 02-04-2016 |
20160031516 | ELECTRIC BICYCLE - A bicycle includes a frame including first and second segments pivotable relative to each other between a folded position and an unfolded position. A magnet is fixed relative to the first segment. An electromagnet is fixed relative to the second segment and disposed in a magnetic field of the magnet when the frame is in the folded position. A controller is configured to power the electromagnet to repel the magnet to unfold the frame. | 02-04-2016 |
20160031517 | ELECTRIC BICYCLE - A bicycle includes a frame including first and second segments pivotable relative to each other between a folded position and an unfolded position. The first segment has a first face and the second segment has a second face facing the first face in the unfolded position. A hinge is disposed between the first and second faces. A locking device extends through at least one of the first and second faces. | 02-04-2016 |
20160031524 | ELECTRIC BICYCLE - An electric bicycle includes a motor and a controller. A heart rate monitor is programmed to communicate a heart rate signal representing a heart rate of an occupant to the controller. The controller is programmed to receive a destination distance of the electric bicycle relative to a predetermined destination. The controller is programmed to provide instruction to adjust power to the motor based at least on the heart rate signal and the destination distance. | 02-04-2016 |
20160031525 | ELECTRIC BICYCLE - An electric bicycle includes a crank, a driving gear and a driven gear spaced from each other, and a belt engaged with the driving gear and the driven gear. An electric motor is coupled to the driving gear. A freewheel is disposed between the crank and the driving gear. At least a portion of the motor is concentric about the freewheel. | 02-04-2016 |
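The control idea in 20160031524 is to adjust motor assist from two inputs: the rider's heart rate and the remaining distance to a destination. A minimal sketch of one possible control law; the target heart rate, scaling constants, and the law itself are illustrative assumptions, since the application claims only that power is adjusted based on these inputs:

```python
def assist_power(heart_rate_bpm: float, dest_distance_km: float,
                 target_bpm: float = 120.0, max_power_w: float = 250.0) -> float:
    """Return a motor assist power (watts) for an e-bike controller.

    Assist rises as heart rate climbs above a target and scales with the
    remaining distance, so a rider far from the destination gets more help.
    All constants are hypothetical tuning values.
    """
    overshoot = max(0.0, heart_rate_bpm - target_bpm)   # bpm above target
    distance_factor = min(1.0, dest_distance_km / 10.0)  # saturate at 10 km
    return max_power_w * min(1.0, overshoot / 40.0) * distance_factor
```

At or below the target heart rate the assist stays at zero; 40 bpm or more above it, with 10 km or more remaining, the assist saturates at the maximum.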
20140126367 | NETWORK APPLIANCE THAT DETERMINES WHAT PROCESSOR TO SEND A FUTURE PACKET TO BASED ON A PREDICTED FUTURE ARRIVAL TIME - A network appliance includes a network processor and several processing units. Packets of a flow pair are received onto the network appliance. Without performing deep packet inspection on any packet of the flow pair, the network processor analyzes the flows, estimates therefrom the application protocol used, and determines a predicted future time when the next packet will likely be received. The network processor determines to send the next packet to a selected one of the processing units based in part on the predicted future time. In some cases, the network processor causes a cache of the selected processing unit to be preloaded shortly before the predicted future time. When the next packet is actually received, the packet is directed to the selected processing unit. In this way, packets are directed to processing units within the network appliance based on predicted future packet arrival times without the use of deep packet inspection. | 05-08-2014 |
20140153571 | FLOW KEY LOOKUP INVOLVING MULTIPLE SIMULTANEOUS CAM OPERATIONS TO IDENTIFY HASH VALUES IN A HASH BUCKET - A flow key is determined from an incoming packet. Two hash values A and B are then generated from the flow key. Hash value A is an index into a hash table to identify a hash bucket. Multiple simultaneous CAM lookup operations are performed on fields of the bucket to determine which ones of the fields store hash value B. For each populated field there is a corresponding entry in a key table and in other tables. The key table entry corresponding to each field that stores hash value B is checked to determine if that key table entry stores the original flow key. When the key table entry that stores the original flow key is identified, then the corresponding entries in the other tables are determined to be a “lookup output information value”. This value indicates how the packet is to be handled/forwarded by the network appliance. | 06-05-2014 |
20150128113 | ALLOCATE INSTRUCTION AND API CALL THAT CONTAIN A SYMBOL FOR A NON-MEMORY RESOURCE - A novel allocate instruction and a novel API call are received onto a compiler. The allocate instruction includes a symbol that identifies a non-memory resource instance. The API call is a call to perform an operation on a non-memory resource instance, where the particular instance is indicated by the symbol in the API call. The compiler replaces the API call with a set of API instructions. A linker then allocates a value to be associated with the symbol, where the allocated value is one of a plurality of values, and where each value corresponds to a respective one of the non-memory resource instances. After allocation, the linker generates an amount of executable code, where the API instructions in the code: 1) are for using the allocated value to generate an address of a register in the appropriate non-memory resource instance, and 2) are for accessing the register. | 05-07-2015 |
20150128117 | LINKER THAT STATICALLY ALLOCATES NON-MEMORY RESOURCES AT LINK TIME - A novel linker statically allocates resource instances of a non-memory resource at link time. In one example, a novel declare instruction in source code declares a pool of resource instances, where the resource instances are instances of the non-memory resource. A novel allocate instruction is then used to instruct the linker to allocate a resource instance from the pool to be associated with a symbol. Thereafter the symbol is usable in the source code to refer to an instance of the non-memory resource. At link time the linker allocates an instance of the non-memory resource to the symbol and then replaces each instance of the symbol with an address of the non-memory resource instance, thereby generating executable code. Examples of instances of non-memory resources include ring circuits and event filter circuits. | 05-07-2015 |
20150128118 | HIERARCHICAL RESOURCE POOLS IN A LINKER - A novel declare instruction can be used in source code to declare a sub-pool of resource instances to be taken from the resource instances of a larger declared pool. Using such declare instructions, a hierarchy of pools and sub-pools can be declared. A novel allocate instruction can then be used in the source code to instruct a novel linker to make resource instance allocations from a desired pool or a desired sub-pool of the hierarchy. After compilation, the declare and allocate instructions appear in the object code. The linker uses the declare and allocate instructions in the object code to set up the hierarchy of pools and to make the indicated allocations of resource instances to symbols. After resource allocation, the linker replaces instances of a symbol in the object code with the address of the allocated resource instance, thereby generating executable code. | 05-07-2015 |
20150128119 | RESOURCE ALLOCATION WITH HIERARCHICAL SCOPE - A source code symbol can be declared to have a scope level indicative of a level in a hierarchy of scope levels, where the scope level indicates a circuit level or a sub-circuit level in the hierarchy. A novel instruction to the linker can define the symbol to be of a desired scope level. Location information indicates where different amounts of the object code are to be loaded into a system. A novel linker program uses the location information, along with the scope level information of the symbol, to uniquify instances of the symbol if necessary to resolve name collisions of symbols having the same scope. After the symbol uniquification step, the linker performs resource allocation. A resource instance is allocated to each symbol. The linker then replaces each instance of the symbol in the object code with the address of the allocated resource instance, thereby generating executable code. | 05-07-2015 |
20150220445 | TRANSACTIONAL MEMORY THAT PERFORMS A PROGRAMMABLE ADDRESS TRANSLATION IF A DAT BIT IN A TRANSACTIONAL MEMORY WRITE COMMAND IS SET - A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command. | 08-06-2015 |
20150220446 | TRANSACTIONAL MEMORY THAT IS PROGRAMMABLE TO OUTPUT AN ALERT IF A PREDETERMINED MEMORY WRITE OCCURS - A transactional memory receives a command, where the command includes an address and a novel GAA (Generate Alert On Action) bit. If the GAA bit is set and if the transactional memory is enabled to generate alerts and if the command is a write into a memory of the transactional memory, then the transactional memory outputs an alert in accordance with preconfigured parameters. For example, the alert may be preconfigured to carry a value or key usable by the recipient of the alert to determine the reason for the alert. The alert may be set up to include the address of the memory location in the transactional memory that was written. The transactional memory may be set up to send the alert to a predetermined destination. The outputting of the alert may be a writing of information into a predetermined destination, or may be an outputting of an interrupt signal. | 08-06-2015 |
20150220449 | NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY - A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. A virtual NID is configured by configuration information in an appropriate one of a set of smaller blocks in a high-speed memory on the NID. There is a smaller block for each virtual NID. A virtual machine on the host can configure its virtual NID by writing configuration information into a larger block in PCIe address space. Circuitry on the NID detects that the PCIe write is into address space occupied by the larger blocks. If the write is into this space, then address translation circuitry converts the PCIe address into a smaller address that maps to the appropriate one of the smaller blocks associated with the virtual NID to be configured. If the PCIe write is detected not to be an access of a larger block, then the NID does not perform the address translation. | 08-06-2015 |
20150222513 | NETWORK INTERFACE DEVICE THAT ALERTS A MONITORING PROCESSOR IF CONFIGURATION OF A VIRTUAL NID IS CHANGED - A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. For each virtual NID there is a block in a memory of a transactional memory on the NID. This block stores configuration information that configures the corresponding virtual NID. The NID also has a single managing processor that monitors configuration of the plurality of virtual NIDs. If there is a write into the memory space where the configuration information for the virtual NIDs is stored, then the transactional memory detects this write and in response sends an alert to the managing processor. The size and location of the memory space in the memory for which write alerts are to be generated is programmable. The content and destination of the alert is also programmable. | 08-06-2015 |
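The two-hash lookup in 20140153571 can be modeled in software: hash A selects a bucket, every populated bucket field is compared against hash B (in hardware, via simultaneous CAM operations), and each matching field indexes a key table whose entry is checked against the original flow key. A minimal sketch with hypothetical data structures; the hardware performs the field comparisons in parallel, whereas this model iterates:

```python
def flow_lookup(flow_key, hash_table, key_table, result_table, hash_a, hash_b):
    """Software model of a two-hash flow lookup.

    hash_a(flow_key) indexes a bucket; each bucket field is a
    (hash_b_value, entry_index) pair. A field matches only if its stored
    hash-B value matches AND the key table confirms the full flow key,
    guarding against hash collisions. Returns the lookup output
    information value, or None on a miss (e.g. a new flow).
    """
    bucket = hash_table[hash_a(flow_key)]
    b = hash_b(flow_key)
    for field_hash, idx in bucket:          # parallel CAM compare in hardware
        if field_hash == b and key_table[idx] == flow_key:
            return result_table[idx]        # how to handle/forward the packet
    return None
```

The second check against the key table is what makes the scheme exact: two distinct flow keys can share both hash values' bucket and field slots, but only the stored full key disambiguates them.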
20090054347 | KININ B1 RECEPTOR PEPTIDE AGONISTS AND USES THEREOF - The present invention provides novel kinin B1 receptor peptide agonists of formula (I) having very good to excellent affinities and selectivity for the B1 receptor. | 02-26-2009 |
20120129774 | KININ B1 RECEPTOR PEPTIDE AGONISTS AND USES THEREOF - The present invention provides novel kinin B1 receptor peptide agonists and uses thereof. | 05-24-2012 |
20140206622 | STABLE PEPTIDE-BASED PACE4 INHIBITORS - Provided are PACE4 inhibitors and their uses for treating infection and cancer. In particular, a method or use is provided for the treatment of a cancer in a subject, comprising administering to the subject a therapeutically effective amount of the PACE4 inhibitors or the composition disclosed. | 07-24-2014 |
20140256646 | MULTI-LEU PEPTIDES AND ANALOGUES THEREOF AS SELECTIVE PACE4 INHIBITORS AND EFFECTIVE ANTIPROLIFERATIVE AGENTS - Disclosed herein are PACE4 inhibitors, compositions comprising PACE4 inhibitors, and uses thereof for lowering PACE4 activity, reducing cell proliferation, reducing tumor growth, reducing metastasis formation, and preventing and/or treating cancer. Also provided are methods for lowering PACE4 activity, reducing the proliferation of a cell, reducing tumor growth and/or treating and preventing cancer. Methods for screening PACE4 inhibitors and cell proliferation inhibitors are further provided. | 09-11-2014 |
20150141324 | STABLE PEPTIDE-BASED FURIN INHIBITORS - Provided are furin inhibitors and their uses for treating pathogen infection. In particular, a method or use is provided for the treatment of a pathogen infection in a subject, comprising administering to the subject a therapeutically effective amount of the furin inhibitors or the composition disclosed, thereby preventing or treating the pathogen infection in the subject. | 05-21-2015 |