Patent application number | Description | Published |
20100309291 | METHOD AND APPARATUS FOR CAPTURING THREE-DIMENSIONAL STEREOSCOPIC IMAGES - A method for capturing a three-dimensional image. The method comprises capturing a combined beam of light having a first polarized beam of light and a second polarized beam of light, sampling the combined beam of light using an imager, and providing the first polarized image to a first output and the second polarized image to a second output. The first polarized beam of light and the second polarized beam of light are orthogonally polarized. The imager includes a set of first polarized pixels for sampling the first polarized beam of light to produce a first polarized image, and a set of second polarized pixels for sampling the second polarized beam of light to produce a second polarized image. | 12-09-2010 |
20100321474 | METHOD AND APPARATUS FOR CAPTURING THREE-DIMENSIONAL STEREOSCOPIC IMAGES - A method for capturing a three-dimensional image is disclosed. The method comprises capturing a combined beam of light having a first polarized beam of light and a second polarized beam of light, sampling the combined beam of light using one or more imagers, and providing an electrical signal representing a mixed polarization image having a first polarized image and a second polarized image as a single output. The first polarized beam of light and the second polarized beam of light are orthogonally polarized. The one or more imagers include a set of first polarized pixels for sampling the first polarized beam of light to produce the first polarized image, and a set of second polarized pixels for sampling the second polarized light to produce the second polarized image. | 12-23-2010 |
20100321476 | CAMERA FOR CAPTURING THREE-DIMENSIONAL IMAGES - A camera for obtaining a three-dimensional image. The camera includes a lens module for capturing a beam of light, a filter module for polarizing the beam of light into a first polarized beam of light and a second polarized beam of light, a polarization array for generating a combined beam of light, one or more imagers for capturing the first polarized beam of light and the second polarized beam of light and an output module for processing and separating the mixed polarization image to produce the three-dimensional image. The imager further comprises one or more first polarized pixels for capturing the first polarized beam of light and one or more second polarized pixels for capturing the second polarized beam of light. | 12-23-2010 |
20100321777 | METHOD AND APPARATUS FOR OPTIMIZING STEREOSCOPIC EFFECT IN A CAMERA - A method for optimizing stereoscopic effect in a three-dimensional image is provided. The method for optimizing stereoscopic effect comprises capturing a first unpolarized beam of light representing a first image and a second unpolarized beam of light representing a second image using a lens module, converting the first and second unpolarized beams of light into first and second polarized beams of light using a filter module, and adjusting separation and convergence of the lens module using a lens control module for generating an output stream. | 12-23-2010 |
20140160255 | Single Camera Device And Method For 3D Video Imaging Using A Refracting Lens Array - An embodiment of the present invention may include an apparatus that captures 3D images having a lens barrel. The lens barrel may include a lens disposed at the first end of the lens barrel, an image capture element at the second end of the lens barrel, and a pair of refracting lenses positioned along the optical axis of the lens barrel. The first and second refracting lenses may be mounted to a first set and second set of positioning elements. The image capture element may capture images continuously at a predetermined frame rate, and the first and second set of positioning elements may continuously change the position of the first and second refracting lenses among a series of predetermined correlated positions based on the predetermined frame rate. | 06-12-2014 |
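The imager designs above interleave first-polarization and second-polarization pixels on a single sensor, so the mixed-polarization output must be separated into two images downstream. A minimal sketch of that separation, assuming (hypothetically) that even pixel columns carry the first polarization and odd columns the second — the abstracts do not specify the actual pixel layout:

```python
def split_polarized_image(mixed):
    # mixed: 2-D list of pixel samples from the single imager output.
    # Layout assumption (illustrative only): even columns are
    # first-polarization pixels, odd columns are second-polarization pixels.
    first = [row[0::2] for row in mixed]   # one eye's image
    second = [row[1::2] for row in mixed]  # the other eye's image
    return first, second
```

Each half-resolution image would then be interpolated back to full width before display, a step omitted here.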
Patent application number | Description | Published |
20110314211 | RECOVER STORE DATA MERGING - Various embodiments of the present invention merge data in a cache memory. In one embodiment a set of store data is received from a processing core. A store merge command and a merge mask are also received from the processing core. A portion of the store data to perform a merging operation thereon is identified based on the store merge command. A sub-portion of the portion of the store data to be merged with a corresponding set of data from a cache memory is identified based on the merge mask. The sub-portion is merged with the corresponding set of data from the cache memory. | 12-22-2011 |
20110314212 | MANAGING IN-LINE STORE THROUGHPUT REDUCTION - Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred. | 12-22-2011 |
20110320695 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 12-29-2011 |
20110320697 | DYNAMICALLY SUPPORTING VARIABLE CACHE ARRAY BUSY AND ACCESS TIMES - Various embodiments of the present invention manage access to a cache memory. In one or more embodiments, a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The target is determined to be available. The request is granted in response to determining that the target is available. A first interleave availability table associated with a first busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. A second interleave availability table associated with a second busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. | 12-29-2011 |
20130060997 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 03-07-2013 |
20130080705 | MANAGING IN-LINE STORE THROUGHPUT REDUCTION - Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred. | 03-28-2013 |
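The store-merge of 20110314211 can be illustrated with a byte-granular merge: the merge mask selects which bytes of the cached data are overwritten by the incoming store data, and unselected bytes keep their cached values. A sketch under the assumption of one mask bit per byte — the abstract does not state the actual mask granularity:

```python
def merge_store_data(cache_line, store_data, merge_mask):
    # Byte-wise merge: where the corresponding mask bit is 1, take the byte
    # from the incoming store data; where it is 0, keep the byte already in
    # the cache line. Bit-per-byte granularity is an illustrative assumption.
    merged = bytearray(cache_line)
    for i in range(len(merged)):
        if (merge_mask >> i) & 1:
            merged[i] = store_data[i]
    return bytes(merged)
```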
Patent application number | Description | Published |
20110314183 | SYSTEM AND METHOD FOR MANAGING DATAFLOW IN A TEMPORARY MEMORY - A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into an intermediate temporary memory and an output; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input. | 12-22-2011 |
20110320659 | DYNAMIC MULTI-LEVEL CACHE INCLUDING RESOURCE ACCESS FAIRNESS SCHEME - An apparatus for controlling access to a resource includes a shared pipeline configured to communicate with the resource, a plurality of command queues configured to form instructions for the shared pipeline, and an arbiter coupled between the shared pipeline and the plurality of command queues configured to grant access to the shared pipeline to one of the plurality of command queues based on a first priority scheme in a first operating mode. The apparatus also includes interface logic coupled to the arbiter and configured to determine that contention for access to the resource exists among the plurality of command queues and to cause the arbiter to grant access to the shared pipeline based on a second priority scheme in a second operating mode. | 12-29-2011 |
20110320694 | CACHED LATENCY REDUCTION UTILIZING EARLY ACCESS TO A SHARED PIPELINE - A method of performing operations in a shared cache coupled to a first requestor and a second requestor includes receiving at the shared cache a first request from the second requester; assigning the request to a state machine; transmitting a first pipe pass request from the state machine to an arbiter; providing a first instruction from the first pipe pass request to a cache pipeline, the first instruction causing a first pipe pass; and providing a second pipe pass request to the arbiter before the first pipe pass is completed. | 12-29-2011 |
20110320722 | MANAGEMENT OF MULTIPURPOSE COMMAND QUEUES IN A MULTILEVEL CACHE HIERARCHY - An apparatus for controlling access to a pipeline includes a plurality of command queues, including a first subset of the plurality of command queues being assigned to process commands of a first command type, a second subset of the plurality of command queues being assigned to process commands of a second command type, and a third subset of the plurality of command queues not being assigned to either the first subset or the second subset. The apparatus also includes an input controller configured to receive requests having the first command type and the second command type and assign requests having the first command type to command queues in the first subset until all command queues in the first subset are filled and then assign requests having the first command type to command queues in the third subset. | 12-29-2011 |
20110320725 | DYNAMIC MODE TRANSITIONS FOR CACHE INSTRUCTIONS - A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step. | 12-29-2011 |
20110320728 | PERFORMANCE OPTIMIZATION AND DYNAMIC RESOURCE RESERVATION FOR GUARANTEED COHERENCY UPDATES IN A MULTI-LEVEL CACHE HIERARCHY - A cache includes a cache pipeline, a request receiver configured to receive off chip coherency requests from an off chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and configured to give priority to off chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit. | 12-29-2011 |
20110320731 | ON DEMAND ALLOCATION OF CACHE BUFFER SLOTS - Dynamic allocation of cache buffer slots includes receiving a request to perform an operation that requires a storage buffer slot, the storage buffer slot residing in a level of storage. The dynamic allocation of cache buffer slots also includes determining availability of the storage buffer slot for the cache index as specified by the request. Upon determining the storage buffer slot is not available, the dynamic allocation of cache buffer slots includes evicting data stored in the storage buffer slot, and reserving the storage buffer slot for data associated with the request. | 12-29-2011 |
20110320735 | DYNAMICALLY ALTERING A PIPELINE CONTROLLER MODE BASED ON RESOURCE AVAILABILITY - A mechanism for dynamically altering a request received at a hardware component is provided. The request is received at the hardware component, and the request includes a mode option. It is determined whether an action of the request requires an unavailable resource and it is determined whether the mode option is for the action requiring the unavailable resource. In response to the mode option being for the action requiring the unavailable resource, the action is automatically removed from the request. The request is passed for pipeline arbitration without the action requiring the unavailable resource. | 12-29-2011 |
20110320779 | PERFORMANCE MONITORING IN A SHARED PIPELINE - A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers. | 12-29-2011 |
20110320855 | ERROR DETECTION AND RECOVERY IN A SHARED PIPELINE - A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic. | 12-29-2011 |
20110320863 | DYNAMIC RE-ALLOCATION OF CACHE BUFFER SLOTS - Dynamic re-allocation of cache buffer slots includes moving data out of a reserved buffer slot upon detecting an error in the reserved buffer slot, creating a new buffer slot, and storing the data moved out of the reserved buffer slot in the new buffer slot. | 12-29-2011 |
20130061002 | PERFORMANCE OPTIMIZATION AND DYNAMIC RESOURCE RESERVATION FOR GUARANTEED COHERENCY UPDATES IN A MULTI-LEVEL CACHE HIERARCHY - A cache includes a cache pipeline, a request receiver configured to receive off chip coherency requests from an off chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and configured to give priority to off chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit. | 03-07-2013 |
20130080708 | DYNAMIC MODE TRANSITIONS FOR CACHE INSTRUCTIONS - A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step. | 03-28-2013 |
20130339606 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The associated method includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the store operations into the platform registers, and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the platform registers using the control information from the platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339607 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The computer product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the first and second store operation into the first and second platform registers and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the first and second platform registers using the control information from the first and second platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339701 | CROSS-PIPE SERIALIZATION FOR MULTI-PIPELINE PROCESSOR - Embodiments relate to cross-pipe serialization for a multi-pipeline computer processor. An aspect includes receiving, by a processor, the processor comprising a first pipeline, the first pipeline comprising a serialization pipeline, and a second pipeline, the second pipeline comprising a non-serialization pipeline, a request comprising a first subrequest for the first pipeline and a second subrequest for the second pipeline. Another aspect includes completing the first subrequest by the first pipeline. Another aspect includes, based on completing the first subrequest by the first pipeline, sending a cross-pipe unlock signal from the first pipeline to the second pipeline. Yet another aspect includes, based on receiving the cross-pipe unlock signal by the second pipeline, completing the second subrequest by the second pipeline. | 12-19-2013 |
20140095836 | CROSS-PIPE SERIALIZATION FOR MULTI-PIPELINE PROCESSOR - Embodiments relate to cross-pipe serialization for a multi-pipeline computer processor. An aspect includes receiving, by a processor, the processor comprising a first pipeline, the first pipeline comprising a serialization pipeline, and a second pipeline, the second pipeline comprising a non-serialization pipeline, a request comprising a first subrequest for the first pipeline and a second subrequest for the second pipeline. Another aspect includes completing the first subrequest by the first pipeline. Another aspect includes, based on completing the first subrequest by the first pipeline, sending a cross-pipe unlock signal from the first pipeline to the second pipeline. Yet another aspect includes, based on receiving the cross-pipe unlock signal by the second pipeline, completing the second subrequest by the second pipeline. | 04-03-2014 |
20140095839 | MONITORING PROCESSING TIME IN A SHARED PIPELINE - A pipelined processing device includes: a pipeline controller configured to receive at least one instruction associated with an operation from each of a plurality of subcontrollers, and input the at least one instruction into a pipeline; and a pipeline counter configured to receive an active time value from each of the plurality of subcontrollers, the active time value indicating at least a portion of a time taken to process the at least one instruction, the pipeline controller configured to route the active time value to a shared pipeline storage for performance analysis. | 04-03-2014 |
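The cross-pipe serialization of 20130339701 — the serialization pipeline completes its subrequest first, then raises a cross-pipe unlock that releases the non-serialization pipeline — can be sketched with two threads and an event standing in for the unlock signal. This is an illustrative model, not the hardware mechanism:

```python
import threading

def cross_pipe_request(log):
    # The serialization pipeline completes subrequest 1, then raises the
    # cross-pipe unlock; the non-serialization pipeline blocks on that
    # unlock before completing subrequest 2, guaranteeing the ordering.
    unlock = threading.Event()

    def serialization_pipe():
        log.append("subrequest-1 complete")
        unlock.set()  # cross-pipe unlock signal

    def non_serialization_pipe():
        unlock.wait()  # hold subrequest 2 until unlocked
        log.append("subrequest-2 complete")

    t2 = threading.Thread(target=non_serialization_pipe)
    t1 = threading.Thread(target=serialization_pipe)
    t2.start()
    t1.start()
    t1.join()
    t2.join()
```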
Patent application number | Description | Published |
20110320721 | DYNAMIC TRAILING EDGE LATENCY ABSORPTION FOR FETCH DATA FORWARDED FROM A SHARED DATA/CONTROL INTERFACE - A computer-implemented method for managing data transfer in a multi-level memory hierarchy that includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when it is determined that the data bus is available, and transferring the requested data directly from the higher level memory to the lower level memory. | 12-29-2011 |
20110320736 | PREEMPTIVE IN-PIPELINE STORE COMPARE RESOLUTION - A computer-implemented method that includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request. | 12-29-2011 |
20110320744 | DIAGNOSTIC DATA COLLECTION AND STORAGE PUT-AWAY STATION IN A MULTIPROCESSOR SYSTEM - A computer-implemented method for collecting diagnostic data within a multiprocessor system that includes capturing diagnostic data via a plurality of collection points disposed at a source location within the multiprocessor system, routing the captured diagnostic data to a data collection station at the source location, providing a plurality of buffers within the data collection station, temporarily storing the captured diagnostic data on at least one of the plurality of buffers, and transferring the captured diagnostic data to a target storage location on a same chip as the source location or another storage location on a same node. | 12-29-2011 |
20110321053 | MULTIPLE LEVEL LINKED LRU PRIORITY - A method that includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to LRU selection logic at the first level when it is determined that the request is active, determining whether the request is the LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is the LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource. | 12-29-2011 |
20120215995 | PREEMPTIVE IN-PIPELINE STORE COMPARE RESOLUTION - A computer-implemented method that includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request. | 08-23-2012 |
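The two-level scheme of 20110321053 — per-group LRU selection at the first level, then a cross-group LRU compare at the second — can be sketched with last-use timestamps standing in for the linked LRU state (an illustrative simplification; the hardware would track recency with linked pointers, not timestamps):

```python
def select_lru_request(groups):
    # groups: list of request groups; each request is a tuple of
    # (request_id, last_use_time, active).
    # First level: within each group, pick the least-recently-used
    # active request. Second level: among the group winners, pick the
    # overall LRU request to access the shared resource.
    winners = []
    for group in groups:
        active = [r for r in group if r[2]]
        if active:
            winners.append(min(active, key=lambda r: r[1]))
    if not winners:
        return None  # no active requests anywhere
    return min(winners, key=lambda r: r[1])[0]
```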
Patent application number | Description | Published |
20110117534 | EDUCATION MONITORING - Group-based, periodic education intervention that provides a targeted curriculum selected specifically for each period based on current skill assessment data is described. For example, candidates' skill levels in multiple skills are assessed, and groups are formed based on commonality of skill level. A period-specific curriculum is generated for each group to address the specific needs of the individuals of the respective group. After delivery of the period-specific targeted curriculum over the period, re-assessments of the current skill of the group members are made, and a period-specific curriculum for the subsequent period is generated and delivered. Fidelity of an implementation of the curriculum is analyzed, and alerts, reminders, and reports are provided to improve fidelity of an implementation of the curriculum. | 05-19-2011 |
20140134591 | EDUCATION MONITORING - Group-based, periodic education intervention that provides a targeted curriculum selected specifically for each period based on current skill assessment data is described. For example, candidates' skill levels in multiple skills are assessed, and groups are formed based on commonality of skill level. A period-specific curriculum is generated for each group to address the specific needs of the individuals of the respective group. After delivery of the period-specific targeted curriculum over the period, re-assessments of the current skill of the group members are made, and a period-specific curriculum for the subsequent period is generated and delivered. Fidelity of an implementation of the curriculum is analyzed, and alerts, reminders, and reports are provided to improve fidelity of an implementation of the curriculum. | 05-15-2014 |
20140134592 | System and Method For Real-Time Observation Assessment - Techniques for real-time observation assessment are provided. The techniques, which are designed for educators, take advantage of handheld computers, desktop/laptop computers and Internet access in order to reduce the paperwork associated with conventional educational assessments. An array of instructional assessment applications are designed to run on handheld computers. The instructional assessment applications may be based on existing and widely used paper methodologies. A common Web-based platform for assessment application distribution, selection, download, data management and reporting is also provided. Users can then periodically synchronize instructional data (assessments, diagnostic results, notes and/or schedules) to the Web site. At the Web site, browser-based reports and analysis can be viewed, administered and shared via electronic mail. | 05-15-2014 |
20150221227 | EDUCATION MONITORING - Group-based, periodic education intervention that provides a targeted curriculum selected specifically for each period based on current skill assessment data is described. For example, candidates' skill levels in multiple skills are assessed, and groups are formed based on commonality of skill level. A period-specific curriculum is generated for each group to address the specific needs of the individuals of the respective group. After delivery of the period-specific targeted curriculum over the period, re-assessments of the current skill of the group members are made, and a period-specific curriculum for the subsequent period is generated and delivered. Fidelity of an implementation of the curriculum is analyzed, and alerts, reminders, and reports are provided to improve fidelity of an implementation of the curriculum. | 08-06-2015 |
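The grouping step these education-monitoring applications describe — forming groups based on commonality of skill level — could be sketched by bucketing candidates into score bands. The fixed band width and the scoring scale are illustrative assumptions, not details from the abstracts:

```python
def form_groups(assessments, band_width=10):
    # assessments: {candidate: skill_score}. Candidates whose scores fall
    # in the same band are grouped together so each group can receive a
    # period-specific curriculum. After re-assessment, calling this again
    # on the new scores regroups candidates for the next period.
    groups = {}
    for candidate, score in sorted(assessments.items()):
        band = score // band_width
        groups.setdefault(band, []).append(candidate)
    return [members for band, members in sorted(groups.items())]
```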
Patent application number | Description | Published |
20100126239 | Conical shank anti-ligature releasable door lever - A lever latch lock assembly including: | 05-27-2010 |
20110068927 | OVER-THE-DOOR PRESSURE SENSOR ANTI-LIGATURE AND ALARM SYSTEM - Apparatus for counteracting a suicide attempt of a person trying to hang himself from a cord extended over the top edge of a door which door is mounted in a door frame in a room, where the door has a top edge facing upward, a latching edge, an inside surface facing the interior of the room with an upper portion thereof generally adjacent the top edge, an opposite outside surface, and a latch assembly with a latch bolt extendible outward of the latching edge, and the frame includes a strike plate for cooperation with the latch bolt, the apparatus including: | 03-24-2011 |
20120167644 | DELAYED EGRESS PADDLE ALARM DOOR LOCK - A delayed egress paddle door lock system operable with a door that is pivotally mounted in a door frame, including | 07-05-2012 |
20130270844 | DOOR LEVER ASSEMBLY - A lever and mounting plate combination attachable to the exposed vertical surface of a door that includes a latch assembly with a spindle extending horizontally outward from the vertical surface, the lever and mounting plate combination including a main plate mountable onto the door's exposed surface and having a window-like opening extending completely through the plate, a lever having a base part which is coupleable to the spindle and pivotable therewith and has an exposed outer part, and a lever plate fixed to the base part of the lever and pivotable with the lever and with the spindle when the spindle is coupled to the lever; the lever plate closely underlies and contacts the inner surface of the main plate while at all times covering the window, the lever being engagable by a user to rotate the lever plate and simultaneously rotate the spindle to open the door. | 10-17-2013 |
20140311194 | DOOR LEVER & KEY CYLINDER LOCK COMBINATION - A door lever and mounting plate assembly attachable to a door and coupleable to a latch and spindle assembly in the door, including | 10-23-2014 |
20140319850 | MAGNETIC DOOR LOCK ASSEMBLY - A latch lock system for a door that is mountable in a doorframe, the latch lock system including a main housing including a spindle and lever, a bolt having a nose part that is attractable to a magnet, the bolt being slidable in the main housing between extended and retracted positions, a spring biasing the bolt to its retracted position, a secondary housing including a strike plate in the doorframe, and a magnet mounted in the secondary housing, the magnet having a magnetic force greater than the spring force, such that the magnet pulls the bolt to its extended position when the door is in its closed position and the bolt is aligned with the magnet. | 10-30-2014 |
Patent application number | Description | Published |
20100171037 | COMPACT SCANNING ELECTRON MICROSCOPE - A compact electron microscope uses a removable sample holder having walls that form a part of the vacuum region in which the sample resides. By using the removable sample holder to contain the vacuum, the volume of air requiring evacuation before imaging is greatly reduced and the microscope can be evacuated rapidly. In a preferred embodiment, a sliding vacuum seal allows the sample holder to be positioned under the electron column, and the sample holder is first passed under a vacuum buffer to remove air in the sample holder. | 07-08-2010 |
20100194874 | User Interface for an Electron Microscope - A user interface for operation of a scanning electron microscope device that combines lower magnification reference images and higher magnification images on the same screen to make it easier for a user who is not used to the high magnification of electron microscopes to readily determine where on the sample an image is being obtained and to understand the relationship between that image and the rest of the sample. Additionally, other screens, such as, for example, an archive screen and a settings screen allow the user to compare saved images and adjust the settings of the system, respectively. | 08-05-2010 |
20100230590 | Compact Scanning Electron Microscope - A compact electron microscope is robust, simple to operate, and preferably requires no special utilities. Imaging can begin shortly after a sample is inserted. A preferred simplified design includes permanent magnets for focusing, lacks a vacuum controller and vacuum gauge, and uses a backscattered electron detector and no secondary electron detector. | 09-16-2010 |
20100314551 | In-line Fluid Treatment by UV Radiation - A UV source is regulated according to one or more purification parameters to produce a desired germicidal effect in a liquid while minimizing wasted power. | 12-16-2010 |
20110133083 | COMPACT SCANNING ELECTRON MICROSCOPE - A compact electron microscope uses a removable sample holder having walls that form a part of the vacuum region in which the sample resides. By using the removable sample holder to contain the vacuum, the volume of air requiring evacuation before imaging is greatly reduced and the microscope can be evacuated rapidly. In a preferred embodiment, a sliding vacuum seal allows the sample holder to be positioned under the electron column, and the sample holder is first passed under a vacuum buffer to remove air in the sample holder. | 06-09-2011 |