Package org.idoox.xml

Contains classes and interfaces which are used to work with XML data.

Interface Summary
Tokenizer Tokenizes a stream containing XML into XML tokens.
TokenizerFactory Factory for XML tokenizers.
TokenizerWrapper.TokenizerState This interface represents the internal state of a TokenizerWrapper.
TokenWriter Writes a stream of XML tokens into an XML stream.
TokenWriterFactory Factory for XML token writers.
 

Class Summary
Attribute Representation of an element attribute.
DeclaredPrefixesStack This class represents a stack of declared namespace URI-to-prefix mappings.
Token This class is a Java mapping of the start/end of an element.
TokenizerResolver Converts a Tokenizer to canonical form according to the Exclusive XML Canonicalization specification.
TokenizerResolver.PrefixesStack  
TokenizerSource This class represents a WASOP-specific source.
TokenizerWrapper This class helps you wrap Tokenizers.
TokenizerWrapper.DefaultTokenizerState This is the default implementation of the internal tokenizer state.
XMLWriterReader This class represents an XMLWriter (TokenWriter) which can also be used as an XMLReader (Tokenizer).
 

Exception Summary
TokenizerException Thrown by a tokenizer if the XML being parsed is invalid or when call conditions are not fulfilled.
 

Package org.idoox.xml Description

Contains classes and interfaces used to work with XML data. XML data is read through the pull-parser interface Tokenizer; writing is done, conversely, through the TokenWriter interface.

Both of these interfaces represent the streaming approach to XML processing (an XML document is represented as a sequence of XML tokens). A Tokenizer can nevertheless be translated into a DOM Element tree using its method Tokenizer.getDOMRepresentation(org.w3c.dom.Document).
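For instance, a sketch of the DOM conversion (a Document is created via JAXP and passed to getDOMRepresentation; the assumption that the method returns the root Element, rather than attaching it itself, is illustrative and should be checked against the Tokenizer interface):

        Tokenizer tokenizer;
        /* tokenizer must be set here */
        javax.xml.parsers.DocumentBuilder builder =
            javax.xml.parsers.DocumentBuilderFactory.newInstance().newDocumentBuilder();
        org.w3c.dom.Document doc = builder.newDocument();
        // translate the token stream into a DOM Element tree
        // (return type assumed to be org.w3c.dom.Element for this sketch)
        org.w3c.dom.Element root = tokenizer.getDOMRepresentation(doc);
        doc.appendChild(root);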

Example of reading XML via Tokenizer:

        Tokenizer tokenizer;
        /* the tokenizer must be set here */
        Token token = new Token();
        byte type;
        while ((type = tokenizer.next()) != Tokenizer.END_DOCUMENT) {
            switch (type) {
                case Tokenizer.START_TOKEN:
                    tokenizer.readToken(token);
                    System.out.println("Start of element: " + token.getLocalName());
                    break;
                case Tokenizer.END_TOKEN:
                    tokenizer.readToken(token);
                    System.out.println("End of element: " + token.getLocalName());
                    break;
                case Tokenizer.CONTENT:
                    System.out.println("Content: " + tokenizer.readContent());
                    break;
            }
        }
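
Writing is symmetric: tokens are emitted to a TokenWriter instead of being pulled from a Tokenizer. The following sketch is illustrative only; the writeToken and writeContent method names are assumptions not confirmed by this package summary, so consult the TokenWriter interface for the actual signatures:

        TokenWriter writer;
        /* the writer must be set here, e.g. obtained from a TokenWriterFactory */
        Token token = new Token();
        /* fill the token with the element's name here */
        // NOTE: the method names below are assumed for illustration only
        writer.writeToken(token);      // emit the start of the element
        writer.writeContent("text");   // emit character content
        writer.writeToken(token);      // emit the end of the element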