Publications

Impact, Private Practice Section of the American Physical Therapy Association · May 1, 2014

Many physical therapy private practices have already spent time switching from ICD-9 code sets to ICD-10 code sets, which are used to report medical diagnoses and inpatient procedures. Even though the deadline has been extended to October 2015, clinics should be financially prepared for the change: American Medical News has reported that the average small practice could expect to spend $83,000 on the changeover. Most practices struggle to see where such costs would come from, but a closer look shows that the change to ICD-10 is more complex than it first appears.

Impact, Private Practice Section of the American Physical Therapy Association · Feb 2, 2013

The authors discuss a decision-making process that avoids a pros-and-cons approach and takes into account personal priorities and ambitions.

 

Impact, Private Practice Section of the American Physical Therapy Association · April 2012

Future patient documentation systems will be faster, easier to use, cheaper, and better integrated into your entire practice. Current documentation systems that require reading are neither effective nor efficient, because reading is slow and unintuitive. Future physical therapy documentation systems will enable note completion and billing in seconds while the patient is leaving the office, but reaching this level will require consideration of accessibility of information, integration with practice workflow, and accountability.

Impact magazine, Private Practice Section, APTA · Jul 1, 2011

Compensation is critical for attracting, keeping, and motivating the right staff. If you structure it correctly, you will pay less and get more results. But if you structure it incorrectly, you may impair all three key aspects of your practice: patient flow, cash flow, and compliance.

Impact magazine, Private Practice Section, APTA · Apr 1, 2011

A real-life husband-and-wife conversation takes us through an analysis of how an effective EMR system can improve your income and save you time—and maybe your marriage!

Impact magazine, Private Practice Section, APTA · Apr 1, 2009

There will be increasing pressure over the next few years for private practice owners to implement electronic medical records (EMR). Consider two unique features, Software as a Service and point-and-click technology, to attain the most effective solution.

Affinity Billing · Jan 1, 2009

Is your practice profitable and compliant? Some state-managed insurance company audits have discovered up to 30% in mishandled and wrongly denied claims. This book shows private practitioners how to automate and enable such insurance claim payment audits in real time. You will discover how to:

  • Improve your cash flow and reduce audit risk using Straight-Through Processing (STP) technologies
  • Design and implement industrial-grade network standards for continuous billing quality improvement and compliance maintenance
  • Reduce operations risk and improve your balance sheet using Software as a Service (SaaS) and outsourcing.

Understand the rules of the modern “payer-provider conflict” and learn to apply and manage winning Internet strategies, such as the “billing network effect.” Gain further insight from guest chapters by Dennis Alessi, Esq., of Mandelbaum, Salsburg, Gold, Lazris & Discenza, PC; Eric Fishman, MD, of EHR Scope; Joe Kamelgard, MD, of Billing Integration, LLC; and Erum Raza, Esq., of Fox Rothschild, LLP.

Impact magazine, Private Practice Section, APTA · Jan 1, 2009

Information technology (IT) is necessary for two reasons: to reduce the complexity of doing business and to increase profitability. It is supposed to allow entrepreneurs and business owners to automate and improve repetitive processes that take time away from building their business; it is not meant to add complexity. IT for the physical therapy private practice must meet five functional criteria and five technical criteria.

Affinity Billing · Sep 1, 2007

Practicing Profitability touches on every aspect of modern office management software, including workflow, reporting, outsourcing, scheduling, EMR, SOAP notes, care plans, coding, billing, collections, HIPAA compliance, and audit risk management. It shows simple steps that practice owners must take to increase practice revenue without wasting time, energy, and money on personnel, software, hardware, or any other resources that dilute their focus from patient care and practice development. The book spans thirty-five chapters and about two hundred pages, and it contains informative illustrations and an extensive index. It’s aimed at practice owners, coaches, owners of billing companies, practice managers, office management consultants, billing specialists, and recent graduates of medical schools and chiropractic colleges.

Practicing Profitability is the first book to systematically approach billing from the “payer-provider conflict” perspective and to apply the “network effect.” The network effect is the most revolutionary characteristic of Internet technology. In short, it’s when the value of a networked service to a customer increases in step with the growing number of customers. It applies to services like Google AdSense, eBay, Wikipedia, Skype, Amazon, Flickr, and MySpace, and it can be used by healthcare practice owners and managers to “level the playing field” with insurance companies.

Today’s Chiropractic LifeStyle · Jan 1, 2007

A study published in the January 2007 issue of Today’s Chiropractic LifeStyle shows unprecedented growth in post-payment audits by insurance companies, in terms of both audit frequency and judgments. The surge in audits is driven by a combination of three factors: continued pressure on insurance companies to turn higher profits, the inability to raise insurance premiums, and timely-payment laws. To meet profit expectations and still play within the new rules, insurers have decided to go after reimbursements after they are paid.

EP Lab Digest · Apr 28, 2006

St. Francis Medical Center is a busy acute care teaching hospital that has consistently been ranked at the top for cardiac surgery outcomes and the lowest mortality in the state. Dr. Glenn Laub, Chairman of the Cardiothoracic Surgery Department at St. Francis, recently saw a 19% increase in his department’s monthly collections. His department achieved this with the help of Affinity Billing, a third-party billing service that consolidates billing services, tracks payer performance from a single point of control, shares Medicare compliance rules globally, and much more.

JOURNAL OF HEALTHCARE INFORMATION MANAGEMENT - HIMSS - Healthcare Information Management Systems Society · Jun 23, 2000

The total investment of the more than fifty e-commerce startups that entered healthcare supply chain management in the past three years has surpassed $500 million. However, none of these early entrants has delivered on the initial promise of restructuring the entire supply chain, replacing the traditional intermediaries, or at least achieving substantial revenue. This article offers a new business-to-business (B2B) e-commerce solution classification paradigm and uses it to analyze the functional requirements for an effective and efficient healthcare supply chain marketplace. The analysis exposes several fundamental B2B market complexities that prevent the early entrants from creating a solid customer base and reaching desired liquidity goals. It also identifies several technological solutions to the problems mentioned. These new technologies create a comprehensive and symmetric order-matching engine that is capable of aggregating buy orders, requesting quotes from multiple vendors simultaneously, and negotiating along multiple criteria.
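
As a rough, hedged illustration of what aggregating buy orders and scoring vendor quotes along multiple criteria can look like in code (the items, vendors, weights, and scoring rule below are illustrative assumptions, not the engine described in the article):

```python
# Minimal sketch: aggregate buy orders by item, then score vendor quotes
# along several criteria with fixed weights. Item names, vendor data, and
# the weighting scheme are illustrative assumptions only.
from collections import defaultdict

buy_orders = [                      # (department, item, quantity)
    ("OR",  "surgical gloves, box", 40),
    ("ICU", "surgical gloves, box", 25),
    ("ER",  "IV catheter 18G",      60),
]

quotes = {                          # vendor -> item -> (unit price, delivery days, fill rate)
    "VendorA": {"surgical gloves, box": (5.10, 2, 0.98), "IV catheter 18G": (0.95, 5, 0.90)},
    "VendorB": {"surgical gloves, box": (4.80, 6, 0.92), "IV catheter 18G": (0.99, 2, 0.99)},
}

WEIGHTS = {"price": 0.6, "delivery": 0.2, "fill": 0.2}   # illustrative only

def aggregate(orders):
    """Sum quantities of identical items across departments."""
    totals = defaultdict(int)
    for _, item, qty in orders:
        totals[item] += qty
    return totals

def score(price, days, fill):
    """Lower is better: weighted blend of price, delivery delay, and shortfall."""
    return WEIGHTS["price"] * price + WEIGHTS["delivery"] * days + WEIGHTS["fill"] * (1 - fill) * 100

for item, qty in aggregate(buy_orders).items():
    best = min(quotes, key=lambda v: score(*quotes[v][item]))
    print(f"{qty:>3} x {item}: award to {best}")
```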

SysAdmin, August 1998, Vol. 7 Issue 8 · Aug 1, 1998

Hybrid systems, desktops that combine UNIX and Windows NT, offer better systems performance and significant flexibility but usually cost more because of increased systems management complexity. However, gaining the advantages of multiple platforms without increasing costs is possible using uniform systems management. Centaur, a combined UNIX/NT workstation, uniformly administered under Atlas, is a working prototype for this new approach. It is currently deployed in production to 105 users spanning trading, sales, research, development, and support at a Wall Street firm. The system approach and the Atlas management techniques are presented here as potential guides to others implementing similar architectures.

Hybrid systems enable coexistence of applications on different platforms prior to their migration to the new platform, but they also introduce a new complexity aspect to system support. Increased complexity usually results in lower systems reliability and increased support costs. Specifically, the systems administrator (SA) productivity ratio can drop from 200 UNIX hosts per SA to 85-100 hybrid desktops per SA, roughly doubling systems support costs. Moreover, as new desktops are brought to the users without taking away the old desktops, the rising number of networked computers creates network congestion problems in terms of both connection ports and network traffic. Finally, hybrid systems expose multiple new problems at the application level, including keyboard mapping problems, divergent behavior under different X servers, different fonts, and centralized file system service synchronization.

Mathematical and Computer Modelling - MATH COMPUT MODELLING, vol. 28, no. 2, pp. 45-63, 1998 · Feb 2, 1998

Distributed computing increases compute power but complicates support processes and raises their costs. Traditional “divide and conquer” approaches to reducing support complexity try to separate support processes by time, function, and structure. The resulting support processes tend to be exorbitantly expensive and not responsive to dynamic user support requirements.

A new synergistic support methodology dramatically improves both client satisfaction and support productivity. Atlas is an implementation of this methodology. It expedites navigation through the typical maze of enterprise-wide system components and provides both crisis management and comprehensive performance history for any host, database, or batch process. Atlas uniquely integrates time, function, and structural aspects of support processes; provides platform-independent and ubiquitous access to systems information through its Java implementation; makes outages, systems’ shortcomings, and support resources visible to everybody; and pulls the right resources together to fix the problems.

In this paper, we first outline the distributed systems management problem domain and our methodology for comprehensive systems, database, and batch administration. Next, we describe the benefits of an automated suite of tools that support it. Finally, we enumerate the standard systems management features currently available in Atlas.

Prentice Hall · Apr 28, 1997

Complete guide to managing mission-critical distributed systems. Learn how to improve performance, increase availability, and reduce costs.

Database Programming & Design · Feb 3, 1996

Database technology is evolving toward reduced support costs and improved performance. Users and application developers will be the ultimate beneficiaries of the new technology, which will achieve levels of standardization and personalization that empower Internet users to create and maintain their own shared database resources for home or office.

According to Moore’s Law, the capacity of a computer chip doubles every 18 months. Assuming this law holds for the next 10 years (as it has for the previous 20), database searches or indexing that take 12 hours today will take less than four minutes in 2007. Cheap holographic memory will hold terabytes of data in less than a cubic centimeter of space. And as Bill Gates predicts, a single wire will deliver mountains of digital data to every home.

Such progress will profoundly change database technology in terms of data content, database administration, and the DBA profession. Data content will evolve from records of text and numbers to temporal sequences of images, voice, and text (movies, that is). Data access methods will also evolve beyond traditional tables of data toward concepts such as the Informix DataBlades, enabling direct manipulation of complex data objects. Finally, many traditional DBA tasks requiring skill and experience today, such as replication and data integrity verification, will be automated.

Database administration is a relatively new concept. In the late ’70s, client/server technology introduced the idea of the systems administrator, which combines the knowledge of systems internals with the access permissions of the systems operator. The DBA concept originated with that of the shared database in the ’80s; it combines knowledge about application requirements with the ability to arrange the data so as to optimize response time.

Database Programming & Design, San Francisco, V.8,n.7 (July 1995), p.32-35,38 · Jul 1, 1995

Low Cost, High Performance

Database Programming & Design 8, 3; 42 · Mar 3, 1995

There is a good article in the March issue of Database Programming & Design called “Low Cost, High Performance” by Aaron Goldberg, Joyce Lee, Yuval Lirov and Maxwell Riggsbee about an organization’s decision process to move tempdb to a file system device. It’s very good (IMHO) and gives insight about the pros and cons and alternatives they considered. You should note, however, that the article addressed the fact that they do not consider regular files as a viable alternative for transaction logs.

https://groups.google.com/forum/#!topic/comp.databases.sybase/UaKXhw8iG7E

SysAdmin, May/June 1995, Vol. 4 Issue 3 · Mar 2, 1995

A full or partial building powerdown destroys an orderly universe in the data center and stimulates hardware and software failures. Unfortunately, building power outages are a necessary evil. On one hand, computer systems require conditioned and uninterruptible power sources. On the other hand, those very power sources require periodic testing and maintenance. Thus, a series of scheduled outages is the price paid for avoiding random power interruptions. On the average we experience one scheduled powerdown per month.

In a distributed computing environment, the difficulties of coping with powerdowns are compounded. The distributed environment for a large organization may comprise hundreds of systems in the data center and thousands of systems on desktops. These systems may be administered by a handful of people, each of whom may be responsible for anywhere from 50 to 150 machines. In such an environment powerdown recovery requires scalable procedures. Other factors that may affect recovery time include time constraints, imperfect configuration management data, chaotic powerup sequence, stimulated disk failures, and disguised network outages.

SPIE · Dec 1, 1993

Increasing complexity of systems requires improved support capabilities. Automation controls the support costs while meeting the growing demands at the same time. Proteus is a firm-wide proactive problem management system with automated advisory capabilities. Proteus non-obtrusively accumulates troubleshooting expertise and quickly recycles it by combining case-based reasoning with text retrieval and fuzzy logic pattern matching. It has linear online and sub-quadratic preprocessing computational time complexities.

IEEE International Conference on Fuzzy Systems, 1993 · Feb 3, 1993

Automated distributed systems management is important to trading floor support because it improves service quality while reducing support costs. Systems management includes two key functions: first, it monitors the network to identify deficient service, and second, it diagnoses the monitored data to ensure the timely and least costly resolution of an identified deficiency. These tasks are difficult to achieve concurrently because each is bound by an inherent conflicting constraint imposed by the network. The conflict was settled by developing a real-time automated alarm interpretation system that uses fuzzy message clustering and averaging techniques acting in concert with a network monitoring platform. In particular, an intelligent network management facility was implemented integrating both network monitoring and diagnostic tasks. The architecture of the intelligent diagnostic system is described. The fuzzy knowledge base design is presented. An operation example is discussed. The system reduces the operator’s message management workload by half. The new and improved computer-generated advice replaces conventional systems monitoring alarms.
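
The published system combines fuzzy clustering and averaging with a network monitoring platform; the following is only a minimal sketch of the clustering step, grouping near-duplicate alarm messages by a fuzzy similarity score so an operator sees one representative per cluster (the threshold and the message formats are my assumptions, not the published design):

```python
# Minimal sketch: cluster near-duplicate alarm messages by fuzzy similarity.
# The 0.75 threshold and the message format are illustrative assumptions,
# not parameters from the published system.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Fuzzy similarity in [0, 1] between two alarm messages."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_alarms(messages, threshold=0.75):
    """Greedy single-pass clustering: each message joins the first cluster
    whose representative it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (representative, [members])
    for msg in messages:
        for rep, members in clusters:
            if similarity(msg, rep) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((msg, [msg]))
    return clusters

if __name__ == "__main__":
    alarms = [
        "host ny-trade01: filesystem /var 95% full",
        "host ny-trade02: filesystem /var 97% full",
        "link to ldn-gw3 down",
        "host ny-trade01: filesystem /var 96% full",
        "link to ldn-gw3 down (retry 2)",
    ]
    for rep, members in cluster_alarms(alarms):
        print(f"{len(members)} alarm(s) like: {rep}")
```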

Neural Networks, Volume 5, Issue 4, July–August 1992, Pages 711–719, Elsevier · Jul 1, 1992

The lack of rigorous techniques to initiate and complete a neural network design in detail renders neural network engineering more of an art than a science. Dimensioning of neural network layers is an example of an important practical design detail that is typically overlooked in theoretical investigations and missed by practitioners. We develop a computer-aided neural network engineering tool based on a hybrid expert system architecture merging knowledge-based and neurode-based components. We demonstrate our approach by mechanizing the design of a counter-propagation neural network. Our automatic Kohonen layer configurator combines A* and simulated annealing search techniques to achieve both automated dimensioning and simultaneous selection of synaptic weights.
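
A minimal sketch of the simulated annealing part of such a configurator, choosing the number of competitive units against a quantization-error-plus-size objective (the toy data, penalty weight, and cooling schedule are illustrative assumptions; the paper additionally couples the search with A* and selects the synaptic weights at the same time):

```python
# Minimal sketch: pick the number of Kohonen-style (competitive) units by
# simulated annealing over a quantization-error-plus-size objective.
# Penalty weight, cooling schedule, and toy data are illustrative only.
import math
import random

random.seed(0)
DATA = [random.gauss(c, 0.3) for c in (0.0, 2.0, 5.0, 9.0) for _ in range(50)]

def train_units(data, m, epochs=30, lr=0.2):
    """Crude 1-D competitive (winner-take-all) training of m units."""
    w = random.sample(data, m)
    for _ in range(epochs):
        for x in data:
            k = min(range(m), key=lambda i: abs(w[i] - x))
            w[k] += lr * (x - w[k])
        lr *= 0.9
    return w

def objective(data, m, penalty=0.05):
    """Mean quantization error plus a size penalty discouraging huge layers."""
    w = train_units(data, m)
    err = sum(min(abs(wi - x) for wi in w) for x in data) / len(data)
    return err + penalty * m

def anneal(data, m0=2, m_max=12, steps=60, t0=1.0):
    m, cost = m0, objective(data, m0)
    best_m, best_cost = m, cost
    for step in range(steps):
        t = t0 * (0.95 ** step)
        cand = min(m_max, max(1, m + random.choice((-1, 1))))
        cand_cost = objective(data, cand)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / t):
            m, cost = cand, cand_cost
            if cost < best_cost:
                best_m, best_cost = m, cost
    return best_m

print("suggested layer size:", anneal(DATA))
```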

Industrial and Engineering Applications of Artificial Intelligence and Expert Systems - IEA/AIE , pp. 1-14, 1992 · Feb 3, 1992

Synergy of communications technology with artificial intelligence results in increased distribution, specialization, and generalization of knowledge. Generalization continues the migration of the technology into mainstream programming practice. Distribution further personalizes automated decision-making support. Specialization scales up domain expertise by architecting, in real time, a series of limited-domain expert systems that work in concert. These trends resolve a list of inefficiencies in current expert system shells, revealed through analysis of a distributed systems management platform.

Journal of Intelligent and Robotic Systems - JIRS , vol. 4, no. 4, pp. 303-319, 1991 · Feb 3, 1992

Our survey of some 40 network maintenance expert systems reveals their main shortcoming: the difficulty of acquiring troubleshooting knowledge, both when initializing the expert system and after its deployment. Additionally, state-of-the-art troubleshooting expert systems do not optimize troubleshooting cost. We present the AO* algorithm to generate a network troubleshooting expert system that minimizes the expected troubleshooting cost and learns better troubleshooting techniques during its operation.

Mathematical and Computer Modelling - MATH COMPUT MODELLING , vol. 16, no. 1, pp. 107-125, 1992 · Jan 17, 1992

The Cutting Stock Problem (CSP) is an instance of a particularly difficult combinatorial optimization problem, in which a few geometrical patterns must be selected and arranged so as to minimize the total cost of the underlying process. A survey of the CSP literature reveals that the scope of applications of this famous problem has expanded over the past thirty-five years from pallet loading, packing, and industrial production planning to computer operations and telecommunications. Three major trends of engineering activity are identified and discussed in detail: integration of applications, a shift from optimization to control, and construction of new related problems.

On the other hand, the notorious difficulties of the popular Linear Programming approach to solving CSP (e.g., handling non-linear constraints and integer solutions) have been only partially remedied by a host of heuristic search schemes. We propose a learning expert system, CUT, that addresses the above difficulties. CUT achieves efficient cutting-pattern knowledge acquisition and inference by combining Simulated Annealing and Case-Based Reasoning. CUT is implemented in PROLOG; the Logic Programming implementation offers the advantages of natural symbolic data processing, declarative programming, and automatic search.
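
A minimal sketch of the Simulated Annealing component alone, applied to a one-dimensional cutting stock instance (the instance data and cooling schedule are illustrative assumptions; CUT itself couples annealing with Case-Based Reasoning in PROLOG):

```python
# Minimal sketch: simulated annealing for a 1-D cutting stock instance
# (assign ordered pieces to stock rods of fixed length so as to use as few
# rods as possible). Instance data and schedule are illustrative only.
import math
import random

random.seed(1)
STOCK_LEN = 10.0
PIECES = [2.1, 3.7, 4.4, 1.9, 5.5, 2.6, 3.3, 4.8, 1.2, 2.9]

def rods_used(assignment, n_rods):
    """Count rods that received at least one piece; None if any rod is over capacity."""
    loads = [0.0] * n_rods
    for piece, rod in zip(PIECES, assignment):
        loads[rod] += piece
    if any(load > STOCK_LEN for load in loads):
        return None
    return sum(1 for load in loads if load > 0)

def anneal(steps=5000, t0=2.0):
    n_rods = len(PIECES)                      # worst case: one rod per piece
    assign = list(range(n_rods))              # start from that worst case
    cost = rods_used(assign, n_rods)
    best, best_cost = assign[:], cost
    for step in range(steps):
        t = t0 * (0.999 ** step)
        cand = assign[:]
        cand[random.randrange(len(PIECES))] = random.randrange(n_rods)
        cand_cost = rods_used(cand, n_rods)
        if cand_cost is None:
            continue                          # reject infeasible moves
        if cand_cost <= cost or random.random() < math.exp((cost - cand_cost) / t):
            assign, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = assign[:], cost
    return best, best_cost

assignment, rods = anneal()
print("rods used:", rods)
```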

ANNALS OF OPERATIONS RESEARCH Volume 39, Number 1, 137-155, Springer · Jan 1, 1992

Queueing network capacity planning can become algorithmically intractable for moderately large networks. It is, therefore, a promising application area for expert systems. However, a survey of the published literature reveals a paucity of integrated systems combining design and optimization of network-based problems. We present a distributed expert system for network capacity planning, which uses a Monte Carlo simulation-based optimization methodology for queueing networks. Our architecture admits parallel simulation of multiple configurations. A knowledge-based search drives the performance optimization of the network. The search process is a randomized combination of steepest descent and branch and bound algorithms, where the generating function of new states uses qualitative reasoning, and the gradient of the objective function is estimated using a heuristic score function method. We found a random search based on the relative order of the performance gradient components to be a powerful qualitative reasoning technique. The system is implemented as a loosely coupled expert system with components written in Prolog, Simscript, and C. We demonstrate the efficacy of our approach through an example from the domain of Jackson queueing networks.
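
A minimal sketch of the underlying Monte Carlo building block, estimating the mean number of jobs in a single M/M/1 queue by discrete-event simulation and checking it against the analytic value ρ/(1−ρ) (the arrival and service rates are illustrative; the paper layers the knowledge-based search for capacity planning on top of such simulations):

```python
# Minimal sketch: discrete-event simulation of an M/M/1 queue, compared
# against the analytic mean number in system rho / (1 - rho). Rates and
# horizon are illustrative assumptions only.
import random

random.seed(2)

def simulate_mm1(arrival_rate, service_rate, horizon=50_000.0):
    """Return the time-averaged number of jobs in an M/M/1 system."""
    t, next_arrival, next_departure = 0.0, random.expovariate(arrival_rate), float("inf")
    n_in_system, area = 0, 0.0
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += n_in_system * (t_next - t)
        t = t_next
        if next_arrival <= next_departure:          # an arrival occurs
            n_in_system += 1
            next_arrival = t + random.expovariate(arrival_rate)
            if n_in_system == 1:
                next_departure = t + random.expovariate(service_rate)
        else:                                       # a departure occurs
            n_in_system -= 1
            next_departure = (t + random.expovariate(service_rate)
                              if n_in_system > 0 else float("inf"))
    return area / t

lam, mu = 0.7, 1.0
rho = lam / mu
print("simulated:", round(simulate_mm1(lam, mu), 3), "analytic:", round(rho / (1 - rho), 3))
```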

Expert Systems Volume 8, Issue 3, pages 171–182, Wiley · Aug 1, 1991

Solving the customer’s LAN/WAN interconnect problem is difficult because of the need to explore many possible configurations (e.g. bridging/routing, packet/channels) and then to choose the best configuration using economic, performance and other criteria. The rapid introduction of new standards, protocols and products to the networking field brings additional complications to the solution and can cause confusion when configuring a network. ALCA is intended to generate all feasible LAN/WAN configuration possibilities automatically and to pick the most appropriate solution to solve the customer’s problem, while specifically addressing open systems interconnection (OSI) standards. Matching communications protocols while searching all possible configurations is notoriously slow even on a computer. We show how the search speed can be significantly improved by using expert system knowledge compilation, a computer-aided software engineering (CASE) technique. ALCA is based on a centrally updated knowledge base of various local area networking products and their interconnect possibilities. ALCA also allows querying to find out protocol interfaces supported by a particular product/service. Finally, it includes a graphic user interface and context-sensitive menus to reduce user information load. ALCA is intended to be used by the field personnel involved in pre-sales support, by the data communication product managers, and as an educational tool for novice communication product managers.

Decision Support Systems - DSS , vol. 7, no. 2, pp. 159-167, 1991 · Feb 3, 1991

International Symposium on Neural Networks - ISNN , 1991 · Feb 3, 1991

The absence of automated tools for neural network design can be explained by the corresponding paucity of rigorous neural network composition techniques. The author suggests a hybrid architecture as the basis for a computer-aided neural network engineering tool. Such a tool is expected to complete automatically the minute yet important details of a neural network architecture. The author demonstrates the approach by developing an automatic counterpropagation neural network design module. It includes a mechanized Kohonen layer configurator, which combines A* and simulated annealing search techniques to achieve both automated dimensioning of the layer and simultaneous selection of its weights.

Expert Systems with Applications, Volume 2, Issues 2–3, 1991, Pages 219–228, Elsevier · Jan 1, 1991

The area of telecommunications network design and management is complex and solutions can become algorithmically intractable for moderately large networks. It is, therefore, a promising applications area for expert systems; however, a survey of the published literature reveals a paucity of integrated systems combining design and optimization of network-based problems. We present a distributed expert telecommunications provisioning system which uses a simulation-based optimization methodology for queueing networks. Our architecture admits parallel simulation of multiple configurations. A knowledge-based search drives our performance optimization of the network. The search process is a randomized combination of Steepest Descent and Branch and Bound algorithms, where the generating function of new states uses qualitative reasoning, and the gradient of the objective function is estimated using a heuristic Score Function method. We found a random search based on the relative order of the performance gradient components to be a powerful qualitative reasoning technique. The system (P3) is implemented as a loosely coupled expert system with components written in PROLOG, SIMSCRIPT, and C. We demonstrate the efficacy of our method through an example from the domain of Jackson queueing networks.

APPLIED INTELLIGENCE Volume 1, Number 2, 121-132, Springer · Jan 1, 1991

Troubleshooting knowledge acquisition is a notorious bottleneck in the development of network maintenance expert systems. We present an improved methodology to automatically generate a skeleton network troubleshooting knowledge base, given data about network topology, test costs, and network component failure likelihood. Our methodology uses AO* search, where a suitable modification of the Huffman coding procedure is found to be an admissible heuristic. The heuristic synergistically uses information about both component failure rates and test costs while relaxing topology constraints. The resulting expert system (XTAR) minimizes expected troubleshooting cost faster and learns better troubleshooting techniques during its operation.
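
For intuition about the cost-versus-likelihood tradeoff that XTAR exploits, here is a much simpler, classical sketch rather than the paper's AO* construction: when exactly one independent candidate component is faulty and each test settles one candidate, running tests in decreasing probability-to-cost ratio minimizes the expected total cost. The component names, probabilities, and costs below are illustrative only.

```python
# Minimal sketch of cost-aware test sequencing in the simplest setting:
# one fault among independent candidates, one test per candidate, and a
# greedy p/c ordering. Names, probabilities, and costs are illustrative.
def expected_cost(order, prob, cost):
    """Expected total test cost when tests are run in the given order."""
    total, remaining = 0.0, 1.0
    for comp in order:
        total += remaining * cost[comp]   # pay for this test iff still searching
        remaining -= prob[comp]           # fault found here with probability p
    return total

prob = {"cable": 0.50, "port": 0.30, "card": 0.15, "firmware": 0.05}
cost = {"cable": 1.0, "port": 2.0, "card": 5.0, "firmware": 3.0}

greedy = sorted(prob, key=lambda c: prob[c] / cost[c], reverse=True)
naive = ["firmware", "card", "port", "cable"]

print("greedy order:", greedy, "expected cost:", expected_cost(greedy, prob, cost))
print("naive order: ", naive, "expected cost:", expected_cost(naive, prob, cost))
```

The greedy rule only covers this simplified independent, single-fault setting; the paper's AO*-based construction is what handles real network topologies.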

Expert Systems With Applications - ESWA , vol. 2, no. 4, pp. 333-343, 1991 · Jan 1, 1991

The problem of automatically reprogramming an expert system, either to adjust it for solving different types of computational problems or to improve its run-time performance, is addressed. A computer-aided software engineering (CASE) method using artificial intelligence techniques to instantiate a concrete program from a given abstract expert system architecture is proposed. Our method, called expert system reification, is suitable for a large range of problem-solving behaviors. Reification combines meta-level programming with partial evaluation of the program. Expert system reification extends this blend with a knowledge-based translation module, thus obtaining an efficient and portable expert system version. We demonstrate the method by applying it to STAREX, an electronic circuit pack troubleshooting expert system developed in PROLOG, automatically deriving the corresponding C code, which is currently installed at one of the AT&T manufacturing facilities.

Computers & Mathematics With Applications - COMPUT MATH APPL , vol. 20, no. 9-10, pp. 141-179, 1990 · Feb 3, 1990

A bibliography on applications of logic programming in decisions and control is presented. The bibliography contains 330 entries, an authors index, a classification by 18 subjects, and a classification of descriptors. The entire bibliography is created by first downloading the entries from several Dialog files, then transforming the obtained material into a set of Prolog clauses, and, finally, by operating a few second-order Prolog predicates on the aforementioned set.

Online Information Review, Volume 14 issue 1, pp 3-12, Emerald · Jan 1, 1990

In this paper we propose a new method for the creation of subject bibliographies. Our method consists of two phases: first, the raw bibliographical material is downloaded from an online bibliographical database (e.g. DIALOG), and then this material is processed using knowledge-based means. We apply a meta-programming approach in which the raw bibliographic material is viewed as a logic program upon which we develop a second-order logic program. The second-order program creates the subject bibliography by operating a rule base on the first-order logic program. The entire system, named REX, was written in PROLOG and used to automatically create a subject bibliography on Applications of Logic Programming in Decision and Control.
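
A minimal sketch of the two-phase idea in a procedural register rather than REX's PROLOG meta-programming: treat downloaded records as data and apply a small rule base that assigns subject headings from title keywords (the record format, rules, and subject names below are illustrative assumptions):

```python
# Minimal sketch: phase 1 parses raw downloaded records, phase 2 applies a
# keyword rule base to group entries by subject. All data are illustrative.
RAW_RECORDS = [
    "Author, A. | Knowledge-based approach to the cutting stock problem | 1992",
    "Author, B. | Fuzzy logic methods in alarm correlation | 1993",
    "Author, C. | Expert systems for queueing network capacity planning | 1992",
]

SUBJECT_RULES = [                    # (keyword in title) -> subject heading
    ("cutting stock", "Combinatorial optimization"),
    ("fuzzy",         "Fuzzy systems"),
    ("queueing",      "Performance modelling"),
    ("expert system", "Expert systems"),
]

def parse(record: str) -> dict:
    author, title, year = (field.strip() for field in record.split("|"))
    return {"author": author, "title": title, "year": year}

def subjects(entry: dict) -> list:
    """Apply every rule whose keyword occurs in the title."""
    title = entry["title"].lower()
    return [subject for keyword, subject in SUBJECT_RULES if keyword in title] or ["Unclassified"]

bibliography = {}
for rec in map(parse, RAW_RECORDS):
    for subject in subjects(rec):
        bibliography.setdefault(subject, []).append(f'{rec["author"]} ({rec["year"]}). {rec["title"]}.')

for subject in sorted(bibliography):
    print(subject)
    for line in bibliography[subject]:
        print("  ", line)
```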

Computers & Mathematics With Applications - COMPUT MATH APPL , vol. 18, no. 1-3, pp. 151-160, 1989 · Feb 3, 1989

In this paper we propose a self learning approach which utilizes artificial intelligence methods in air combat. It is based on the generation and evaluation of situations and on the subsequent construction of optimal missions. The choice of the evaluation function depends on the partner’s and the opponent’s priorities which are not necessarily known in advance. A learning algorithm is proposed in order to determine these unknown priorities. Furthermore an algorithm is proposed which enables one to decide if the learnt information is sufficient to win. Implementation of these algorithms and their similarities with well known pattern recognition algorithms are outlined. The use of learning algorithms in games involving competing groups of cooperating players (air combats in particular) is new.

IEEE International Conference on Control and Applications, Proceedings - ICCON , 1989 · Feb 3, 1989

Chemical etching is sensitive to a number of process variables that need to be maintained in a certain relationship to achieve a good quality product. We describe an intelligent control system to support chemical etch processes. The system serves two purposes simultaneously: first, it dispenses on-line expert advice on how to control chemical etching, and second, it constitutes a repository of qualitative knowledge and thus serves to codify the folklore of chemical etch control. The expert system has been prototyped using PROLOG and installed on the factory floor.

IEEE International Conference on Systems, Man, and Cybernetics - SMC , 1989 · Feb 3, 1989

Intelligent decision systems in time-invariant situations must be able to manipulate time-variant data and functions. The authors present an algorithm for the following dynamic computational problem: given a set of continuous functions, keep track efficiently of their order in their domain. The algorithm has time complexity O(n log₂ n + K · min{log₂ n, log₂ K}), where n is the number of functions and K is the total number of intersections between the functions. The solution provides for a convenient object-oriented view of the time-variant priority queue. Contrary to the abstract data type consisting of the array (data structure) and th…

Engineering Applications of Artificial Intelligence - ENG APPL ARTIF INTELL , vol. 2, no. 1, pp. 3-18, 1989 · Feb 3, 1989

Circuit pack troubleshooting is one of the high-impact processes aimed at increasing the quality of manufacturing of electronic systems. We describe a methodology that allows for smooth integration of shallow and deep knowledge (e.g., rules and device topology) as well as for amalgamation of different troubleshooting strategies (e.g., "test" and "replace"). An expert system for troubleshooting a 200-component hybrid circuit pack is constructed using the STAREX approach and installed at one of AT&T's factories. The prototype expert system in PROLOG consists of eighteen meta-rules and an automatic run-time rule generator. The automatically translated production system, in either C5 or C, currently uses 11,000 rules and has an estimated 80% fault coverage. Our methodology uses a variety of techniques from the object-oriented and logic programming paradigms. The concept of a malfunctioning hierarchy is extended to obtain a uniform representation of variable diagnostic tree objects with hierarchical run-time binding capabilities in both parameters and topology. The counterpart concept, that of a variable inference engine, is proposed and utilized to obtain a uniform, object-oriented knowledge representation for circuit pack diagnosis and repair.

Proceedings of the 27th IEEE Conference on Decision and Control, 1988., pp. 1818 - 1822 vol.3 · Dec 7, 1988

The authors propose an artificial-intelligence-based design procedure that addresses the following robust adaptive stabilization problem: given the inputs and outputs of an unknown time-varying system, construct a family of plants, one of which is the true system, and obtain a simultaneous stabilizer for that family. The procedure consists of replacing the classical plant/compensator control loop by a higher-level control system consisting of three collaborating expert systems. The procedure is robust, since it utilizes a simultaneous design scheme with a simultaneous identifier. From the point of view of computation, a heuristic scheme is designed, and an admissible heuristic is constructed even for a tree with time-varying cost functions.

Journal of Guidance Control and Dynamics - J GUID CONTROL DYNAM , vol. 11, no. 5, pp. 425-429, 1988 · Nov 3, 1988.

Metadiagnosis 

IEEE International Symposium on Intelligent Control – ISIC , 1988 · Feb 3, 1988

The need for a computerized diagnostic strategy choice aiding tool is identified, and a methodology for its implementation is proposed. In troubleshooting practice, a diagnostic strategy may be described by the test sequencing optimization objective and the algorithm to produce the optimal test sequence. It is possible to pose a constrained optimization problem in the space of the diagnostic strategies. Solving such optimization problems requires a methodology for representing and manipulating the strategies in conjunction with the given problem in the troubleshooting domain. It is emphasized that an expert diagnostic system should be able to choose and implement a diagnostic strategy depending on the current situation. To facilitate such an ability, it is necessary to create a methodology for representing different diagnostic strategies, to provide tools for evaluating the current situation (e.g. specifics of the failure rates, costs, system topology, etc.), and to provide a mechanism for matching and implementing a diagnostic strategy in the given situation. Such an evaluation of the system under diagnosis is called metadiagnosis. A semantic control approach for a computerized strategy choice aiding tool architecture is proposed, and an example of its application is provided.

Applied Mathematics Letters , vol. 1, no. 1, pp. 61-64, 1988 · Feb 3, 1988

A hierarchical approach is formulated for the shortest collision-free path construction problem. The new concepts of k-th neighborhood, k-th visibility graph and k-th shortest path are introduced. The proposed approach generalizes some earlier algorithms and allows for incremental improvement of the planned path.

SIMULATION, vol. 50 no. 1 12-24, SAGE journals · Jan 1, 1988

The field of control systems simulation needs artificial intelligence technology as much as expert systems need systems simulation tools. A functional approach for the design of expert systems that perform model generation and simulation is proposed. A differential games simulator design is chosen to exemplify the above ideas. A discrete-event approach, based on the geometry of the game, is proposed. Results are comparable with the simulation results obtained using the imperative approach, but the interrogative approach offers faster execution and clearer simulator definition. Knowledge representation of the differential games models is described using semantic networks. The model generation methodology is a blend of several problem-solving paradigms, and hierarchical dynamic goal system construction serves as the basis for model generation. A Prolog-based implementation of the system is suggested.

Software - Practice and Experience - SPE , vol. 17, no. 3, pp. 187-195, 1987 · Mar 3, 1987

An integrated data dictionary (IDD) is defined to contain data definitions and their applications descriptions. An IDD which includes report descriptions may be used as an input data structure for the automatic constructor of report generators (ACORG) in any data manipulation language. ACORG may be programmed in an elegant way using recursive techniques.
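
A minimal sketch of the recursive idea: walk a nested data-dictionary description of a report and emit its layout, recursing into each group. The dictionary structure and field names are illustrative assumptions, not the paper's IDD schema.

```python
# Minimal sketch: a recursive report-layout generator driven by a nested
# data-dictionary description. Structure and field names are illustrative.
IDD = {
    "report": "Open Positions",
    "groups": [
        {"field": "desk", "groups": [
            {"field": "trader", "columns": ["symbol", "quantity", "avg_price"]},
        ]},
    ],
}

def generate(node, depth=0):
    """Recursively turn a dictionary node into report-layout lines."""
    indent = "  " * depth
    lines = []
    if "report" in node:
        lines.append(f"REPORT {node['report']}")
    if "field" in node:
        lines.append(f"{indent}GROUP BY {node['field']}")
    if "columns" in node:
        lines.append(f"{indent}  PRINT " + ", ".join(node["columns"]))
    for child in node.get("groups", []):
        lines.extend(generate(child, depth + 1))
    return lines

print("\n".join(generate(IDD)))
```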

Computers & Mathematics With Applications - COMPUT MATH APPL , vol. 13, no. 1-3, pp. 261-274, 1987 · Feb 3, 1987

A general framework for the utilization of large numbers of optimal pursuit-evasion algorithms, as applied to air combat, is described. The framework is based upon and is driven by artificial intelligence concepts. The method employed involves the valuation of alternative tactical strategies and maneuvers through a goal system and pilot-derived expert data bases. The system is designed to display the most promising strategies to the pilot for a final decision.

Two aspects of the concept above are described here: the general framework and a specific implementation for a synthetic method of flight and fire control system optimization. Details of the implementation, based on off-the-shelf hardware and a standard programming language, are also given.

Potential utilization of these concepts includes other areas as well: submarine warfare and satellite based weapon systems are two possible additional applications. Nonmilitary applications are air traffic control and optimal scheduling.

Systems & Control Letters - SYST CONTROL LETT , vol. 9, no. 4, pp. 317-322, 1987 · Jan 3, 1987

The problem of simultaneous system identification is posed, and an efficient algorithm for its solution is formulated. Our algorithm is a blend of A* search and Simulated Annealing. The proposed algorithm returns the optimal solution, while the number of required operations is usually much smaller than that of a brute-force algorithm.

Computers & Mathematics With Applications - COMPUT MATH APPL , vol. 12, no. 7, pp. 839-858, 1986 · Dec 3, 1986

Stereotaxic localization of target points identified on single CT scan slices obtained using commercially available frame systems is widely available. These systems have intrinsic limitations imposed by the computational algorithms used to determine the stereotaxic frame settings for a desired approach to the target. The extant descriptions of stereotaxic frames are mathematically incomplete, confounding efforts to delineate the limitations of these devices and improve their versatility. This report provides a complete mathematical description for a commercially available device (Brown-Roberts-Wells stereotaxis system). This systems analysis was used to design and implement microcomputer-based (Apple Macintosh) interactive graphics software providing enhanced versatility in locating target points, selecting approach directions, and handling sequences of frame settings needed in procedures where multiple interventions are required. Application of this new software in neurosurgical procedures to uniformly distribute interstitial radiotherapy sources through tumor masses is described.

Robotica , vol. 4, no. 03, 1986 · Apr 4, 1986

The synchronized action of robots is important in many applications, especially in the manufacturing processes. An algorithm to obtain a smooth synchronized movement of the multiple-robot system is presented for the case when only one processor is available to control all of the robots that incorporate stepper motors. The origin of this algorithm stems from the digital differential analyzer used in machine tool control.
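
A minimal sketch of the digital-differential-analyzer idea: one control loop emits step pulses so that all motors start and finish a straight-line joint-space move together, with the slower axes spread evenly over the move (the axis counts and step totals below are illustrative only).

```python
# Minimal sketch of DDA-style synchronized stepping: the motor with the
# longest move sets the tick count, and each other motor accumulates its
# step demand and pulses on accumulator overflow, Bresenham-style.
# The step totals in the example are illustrative only.
def synchronized_steps(targets):
    """Yield, tick by tick, which motors should step so that motors with
    fewer total steps are spread evenly across the whole move."""
    major = max(abs(t) for t in targets)          # motor with the longest move
    error = [0] * len(targets)                    # one DDA accumulator per motor
    for _ in range(major):
        pulses = []
        for i, target in enumerate(targets):
            error[i] += abs(target)
            if error[i] >= major:                 # accumulator overflow -> emit a step
                error[i] -= major
                pulses.append(1 if target > 0 else -1)
            else:
                pulses.append(0)
        yield pulses

# Example: three motors that must travel 10, 4, and -7 steps in one move.
totals = [0, 0, 0]
for pulses in synchronized_steps([10, 4, -7]):
    totals = [a + b for a, b in zip(totals, pulses)]
print("final positions:", totals)   # expected: [10, 4, -7]
```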

IEEE Transactions on Biomedical Engineering - IEEE TRANS BIOMED ENG , vol. BME-33, no. 7, pp. 723-724, 1986 ·

The method given for finding the position of a target in three-dimensional space from a CT image is incorrect. The solution is based on representation of the target vector as a linear combination of two coplanar nonparallel vectors. This results in a single linear equation with two unknowns. An alternative formulation is offered and solved, and the existence and uniqueness of the solution are proved.
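
One way to state the underlying point, in my own notation rather than the note's published derivation:

```latex
% Hedged illustration, not the published derivation. If u and v span a
% single plane, then writing a general 3-D target t as
%   t = alpha*u + beta*v
% is well posed only when t happens to lie in that plane. A well-posed
% formulation uses a third, linearly independent vector w:
\[
  \mathbf{t} = \alpha\,\mathbf{u} + \beta\,\mathbf{v} + \gamma\,\mathbf{w},
  \qquad
  \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}
  = \bigl[\,\mathbf{u}\ \ \mathbf{v}\ \ \mathbf{w}\,\bigr]^{-1}\mathbf{t},
\]
% with a unique solution exactly when det[u v w] is nonzero.
```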

Information Systems - IS , vol. 10, no. 4, pp. 371-376, 1985 · Feb 3, 1985

One of the most important issues in any database design is the optimization of its performance. External database parameters play one of the main roles in network database performance considerations. The analytic and simulative approaches to establishing these parameters are discussed. A heuristic approach that uses system simulation to find the optimal external database parameters is developed and compared with the analytic one. The software system that implements the simulation methodology is presented; it has been successfully used with the VAX-11 DBMS.

Operations Research Letters - ORL , vol. 1, no. 2, pp. 79-81, 1982 · Feb 3, 1982

In the literature, the Brown Method is often recommended for forecasting with the smoothing constant α = 0.1 or α = 0.2. We describe an experiment for checking this recommendation; the results indicate that it has severe drawbacks. An alternative is suggested.
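
A minimal sketch of such an experiment, assuming the simple exponential smoothing reading of the Brown method and a synthetic drifting series (the data, error metric, and candidate constants are my assumptions, not the paper's design):

```python
# Minimal sketch: compare one-step-ahead forecast errors of simple
# exponential smoothing for several smoothing constants alpha on a
# synthetic series. Series, metric, and alphas are illustrative only.
import random

random.seed(3)
# Synthetic demand series with a mild upward drift plus noise.
series = [100 + 0.8 * t + random.gauss(0, 5) for t in range(60)]

def one_step_errors(data, alpha):
    """Mean absolute one-step-ahead forecast error of simple exponential
    smoothing: forecast(t+1) = alpha * x(t) + (1 - alpha) * forecast(t)."""
    forecast, abs_errors = data[0], []
    for x in data[1:]:
        abs_errors.append(abs(x - forecast))
        forecast = alpha * x + (1 - alpha) * forecast
    return sum(abs_errors) / len(abs_errors)

for alpha in (0.1, 0.2, 0.4, 0.8):
    print(f"alpha = {alpha:>3}: mean abs error = {one_step_errors(series, alpha):.2f}")
```

On a drifting series like this one, the larger constants typically track the data better, which is in the spirit of the article's reservation about a fixed 0.1 or 0.2.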

Conference on Decision and Control - CDC , 1989

The development of a conceptual framework for an operational, onboard, real-time multiprocessing computer system capable of assisting the pilot in flight and fire control decisions, i.e., a tactical decision aiding expert system, is discussed. Air combat is considered as a game problem in which two opposing teams mutually endeavor to maximize their opportunities to destroy each other while minimizing their own risk of loss. The authors describe their approach to generating and solving this game through semantic control theory, which is introduced to make such a problem accessible. Specifically, they introduce logic into the model to assist in breaking it down sufficiently so that the important aspects of the classical approach can be used at the low level of the solution hierarchy, while the high-level mechanism of the solution generates the minimal space of all possible missions, assuming team cooperation, and devises an efficient search procedure on this space to find the best mission. The difference between classical control and semantic control is essentially the introduction of a correlator block, which uses the system input and output to generate the classical feedback control law by symbolic manipulation.