Here, we describe methods by which language, as a complex system, interacts causally with probable states of qualia amplitudes at the quantum scale, biasing information, energy, and mass in complex living systems & technoculture.
| + | = 0
0 + 0 = 0
0 + 0 = |
| + | = ||
0 = -|
Art or Memetics, Physics or Physicks, Complex Systems Information Processing Methods or Technoculture, Game Theory, Biomechanics
With this method, we now reassign management of earth living systems to earth living systems and living technologies.
Sources
Technoculture: Toward A Unified Theory of Information
Another World Is Possible, but Everything Is Terrible: Using Memetics as a Method toward Developing an Evolving Legal & Ethical Framework for Alternate(net) Realities
Strategies of deductive reasoning
As compared with definitory rules, strategic rules of reasoning have received relatively scant attention from logicians and philosophers. Indeed, most of the detailed work on strategies of logical reasoning has taken place in the field of computer science. From a logical vantage point, an instructive observation was offered by the Dutch logician-philosopher Evert W. Beth in 1955 and independently (in a slightly different form) by the Finnish philosopher Jaakko Hintikka. Both pointed out that certain proof methods, which Beth called tableau methods, can be interpreted as frustrated attempts to prove the negation of the intended conclusion. For example, in order to show that a certain formula F logically implies another formula G, one tries to construct in step-by-step fashion a model of the logical system (i.e., an assignment of values to its names and predicates) in which F is true but G is false. If this procedure is frustrated in all possible directions, one can conclude that G is a logical consequence of F.
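For propositional formulas, this countermodel search can be carried out exhaustively. The following Python sketch (an illustration added here, not drawn from Beth's or Hintikka's own work) checks whether F logically implies G by attempting, in every possible way, to build a model in which F is true and G is false; if every attempt is frustrated, G follows from F:

```python
from itertools import product

def evaluate(formula, model):
    """Evaluate a propositional formula over a truth assignment.
    Formulas are nested tuples: ('not', f), ('and', f, g),
    ('or', f, g), ('imp', f, g); a bare string is a variable."""
    if isinstance(formula, str):
        return model[formula]
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], model)
    if op == 'and':
        return evaluate(formula[1], model) and evaluate(formula[2], model)
    if op == 'or':
        return evaluate(formula[1], model) or evaluate(formula[2], model)
    if op == 'imp':
        return (not evaluate(formula[1], model)) or evaluate(formula[2], model)
    raise ValueError(f"unknown connective: {op}")

def entails(f, g, variables):
    """F implies G iff every attempt to construct a countermodel
    (F true, G false) fails."""
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if evaluate(f, model) and not evaluate(g, model):
            return False  # countermodel found: G does not follow from F
    return True  # frustrated in all directions: G follows from F

# Modus ponens: p together with p ⊃ q entails q.
f = ('and', 'p', ('imp', 'p', 'q'))
print(entails(f, 'q', ['p', 'q']))  # True
```

For first-order logic no such exhaustive search over finitely many assignments exists, which is exactly why the strategic questions discussed below arise.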
The number of steps required to show that the countermodel is frustrated in all directions depends on the formula to be proved. Because this number cannot be predicted mechanically (i.e., by means of a recursive function) on the basis of the structures of F and G, the logician must anticipate and direct the course of the construction process in some other way (see decision problem). In other words, he must somehow envisage what the state of the attempted countermodel will be after future construction steps.
Such a construction process involves two kinds of steps pertaining to the objects in the model. New objects are introduced by a rule known as existential instantiation. If the model to be constructed must satisfy, or render true, an existential statement (e.g., “there is at least one mammal”), one may introduce a new object to instantiate it (“a is a mammal”). Such a step of reasoning is analogous to what a judge does when he says, “We know that someone committed this crime. Let us call the perpetrator John Doe.” In another kind of step, known as universal instantiation, a universal statement to be satisfied by the model (e.g., “everything is a mammal”) is applied to objects already introduced (“Moby Dick is a mammal”).
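The two kinds of steps can be sketched in code. In this toy Python illustration (an assumption-laden sketch, not a standard algorithm from the source), formulas are plain strings with a named variable, existential instantiation mints a brand-new "John Doe" constant, and universal instantiation applies the statement to every object already in the model:

```python
import itertools

# Generator of fresh constant names: c0, c1, c2, ...
fresh = (f"c{i}" for i in itertools.count())

def existential_instantiation(body, var):
    """Satisfy an existential statement by introducing a new object.
    E.g. 'there is at least one mammal' yields 'Mammal(c0)'."""
    witness = next(fresh)
    return body.replace(var, witness), witness

def universal_instantiation(body, var, domain):
    """Apply a universal statement to every object already introduced.
    E.g. 'everything is a mammal' yields 'Mammal(o)' for each object o."""
    return [body.replace(var, obj) for obj in domain]

stmt, a = existential_instantiation("Mammal(x)", "x")
print(stmt)  # Mammal(c0)
print(universal_instantiation("Mammal(x)", "x", [a, "MobyDick"]))
# ['Mammal(c0)', 'Mammal(MobyDick)']
```

Note the asymmetry the sketch makes visible: existential steps grow the domain, while universal steps multiply formulas over the existing domain, which is the source of the combinatorial difficulties described next.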
There are difficulties in anticipating the results of steps of either kind. If the number of existential instantiations required in the proof is known, the question of whether G follows from F can be decided in a finite number of steps. In some proofs, however, universal instantiations are required in such large numbers as the proof proceeds that even the most powerful computers cannot produce them fast enough. Thus, efficient deductive strategies must specify which objects to introduce by existential instantiation and must also limit the class of universal instantiations that need to be carried out.
Constructions of countermodels also involve the application of rules that apply to the propositional connectives ~, &, ∨, and ⊃ (“not,” “and,” “or,” and “if…then,” respectively). Such rules have the effect of splitting the attempted construction into several alternative constructions. Thus, the strategic question as to which universal instantiations are needed can often be answered more easily after the construction has proceeded beyond the point at which splitting occurs. Methods of automated theorem-proving that allow such delayed instantiation have been developed. This delay involves temporarily replacing bound variables (variables within the scope of an existential or universal quantifying expression, as in “some x is ...” and “any x is ...”) by uninterpreted “dummy” symbols. The problem of finding the right instantiations then becomes a problem of solving sets of functional equations with dummies as unknowns. Such problems are known as unification problems, and algorithms for solving them have been developed by computer scientists.
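A minimal unification algorithm fits in a few dozen lines. The Python sketch below (a simplified Robinson-style illustration, not any particular published implementation) represents dummy variables as strings beginning with `?` and compound terms as tuples `(functor, arg1, ...)`; it returns a substitution solving the equation, or `None` if none exists:

```python
def is_var(t):
    """Dummy (unknown) symbols are strings beginning with '?'."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: refuse cyclic bindings such as ?x = g(?x)."""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 equal, or None."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        if occurs(t1, t2, subst):
            return None
        return {**subst, t1: t2}
    if is_var(t2):
        return unify(t2, t1, subst)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # clash of constants or functors

# f(?x, g(a)) unifies with f(b, g(?y)) under {?x: b, ?y: a}.
print(unify(('f', '?x', ('g', 'a')), ('f', 'b', ('g', '?y'))))
# {'?x': 'b', '?y': 'a'}
```

Solving such equations is what lets a theorem prover postpone the choice of instantiations until the splitting of the construction has revealed which ones are actually needed.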
"Semantic Representation of Mathematical Knowledge" was held at the Fields Institute in Toronto during February 3–5, 2016. The goal of the workshop was to lay down the foundations of a prototype semantic representation language for the GDML. The workshop's organizers recognized that the extremely wide scope of mathematics as a whole made it unrealistic to map out the detailed concepts, structures, and operations needed and used in individual mathematical subjects. The workshop therefore limited itself to surveys of the status quo in mathematical representation languages including representation of prominent and fundamental theorems in certain areas that could serve as building blocks for additional mathematical results, and to discussing ways to best identify and design semantic components for individual disciplines of mathematics.