We are having an issue with XML file generation using JAXB. Although marshalling completes successfully, the generated XML file is occasionally corrupted (we generate around 200 XML files every day, each about 150 MB in size; so far only 2 files have been corrupted in two months).
No errors are reported in the log file (catalina.out), even though we log every exception.
When we rerun the job, the files are generated successfully.
The application uses the following code segment to marshal the Java object to XML:
public static String marshall(final Marshaller marshaller, final Object obj, final String transformFileName)
        throws ServiceException {
    File file1 = null;
    try {
        file1 = new File(transformFileName);
        marshaller.marshal(obj, new StreamResult(file1));
        return file1.getAbsolutePath();
    } catch (XmlMappingException e) {
        throw new ServiceException(e);
    } catch (IOException e) {
        throw new ServiceException(e);
    }
}
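Since the job succeeds on rerun, one way to guarantee that a consumer never sees a partially written file is to marshal into a temporary file in the target directory and atomically rename it into place only after the write has fully completed. A minimal sketch using java.nio (the plain Files.write stands in for the marshaller call; class and method names are illustrative, and a rename within a single directory is normally atomic even on NFS):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicWrite {
    // Write content to a temp file in the same directory, then rename it into
    // place. The move is atomic, so readers never observe a half-written file;
    // if the JVM dies mid-write, only the temp file is left behind.
    public static Path writeAtomically(Path target, String content) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), target.getFileName().toString(), ".tmp");
        try {
            Files.write(tmp, content.getBytes(StandardCharsets.UTF_8));
            return Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            Files.deleteIfExists(tmp); // do not leave debris behind on failure
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path out = writeAtomically(dir.resolve("statement.xml"), "<statement/>");
        System.out.println(Files.readAllLines(out).get(0));
    }
}
```

A corrupted run then leaves a stray `.tmp` file rather than a truncated `statement.xml`, which also makes failures easy to detect.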
The following is the bean definition of the marshaller:
<bean id="bankStatementMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
    <property name="contextPath" value="*.bankstatement" />
    <property name="schema" value="classpath:Statement.xsd" />
</bean>
The marshaller is used by multiple threads at the same time. We have already checked the Spring code and confirmed that, for every call to marshal, Spring creates a new Marshaller object, so we have ruled out a concurrency issue (as per our understanding).
In addition, we create all of the XML files on an NFS file system.
While going through the JAXB Marshaller implementation, we found the following code segment, in which the cleanUp method ignores the IOException thrown when flushing the streams:
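For reference, the pattern Spring follows can be reproduced with plain JAXB: the JAXBContext is thread-safe and expensive to build, so it should be created once and shared, while Marshaller instances are not thread-safe and must be created per call. A sketch of that pattern (the Statement class is illustrative; it uses the javax.xml.bind API that this Spring OXM stack is built on):

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class PerCallMarshaller {
    // Illustrative stand-in for the bank statement classes.
    @XmlRootElement(name = "statement")
    public static class Statement {}

    // JAXBContext is thread-safe: build once, share across all threads.
    private static final JAXBContext CTX;
    static {
        try {
            CTX = JAXBContext.newInstance(Statement.class);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Marshaller is NOT thread-safe: create a fresh one per call,
    // which is what Spring's Jaxb2Marshaller does internally.
    public static String marshal(Object obj) throws JAXBException {
        Marshaller m = CTX.createMarshaller();
        StringWriter sw = new StringWriter();
        m.marshal(obj, sw);
        return sw.toString();
    }
}
```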
private void write(Object obj, XmlOutput out, Runnable postInitAction) throws JAXBException {
    try {
        if (obj == null)
            throw new IllegalArgumentException(Messages.NOT_MARSHALLABLE.format());
        if (schema != null) {
            // send the output to the validator as well
            ValidatorHandler validator = schema.newValidatorHandler();
            validator.setErrorHandler(new FatalAdapter(serializer));
            // work around a bug in JAXP validator in Tiger
            XMLFilterImpl f = new XMLFilterImpl() {
                @Override
                public void startPrefixMapping(String prefix, String uri) throws SAXException {
                    super.startPrefixMapping(prefix.intern(), uri.intern());
                }
            };
            f.setContentHandler(validator);
            out = new ForkXmlOutput(new SAXOutput(f) {
                @Override
                public void startDocument(XMLSerializer serializer, boolean fragment, int[] nsUriIndex2prefixIndex, NamespaceContextImpl nsContext) throws SAXException, IOException, XMLStreamException {
                    super.startDocument(serializer, false, nsUriIndex2prefixIndex, nsContext);
                }
                @Override
                public void endDocument(boolean fragment) throws SAXException, IOException, XMLStreamException {
                    super.endDocument(false);
                }
            }, out);
        }
        try {
            prewrite(out, isFragment(), postInitAction);
            serializer.childAsRoot(obj);
            postwrite();
        } catch (SAXException e) {
            throw new MarshalException(e);
        } catch (IOException e) {
            throw new MarshalException(e);
        } catch (XMLStreamException e) {
            throw new MarshalException(e);
        } finally {
            serializer.close();
        }
    } finally {
        cleanUp();
    }
}
private void cleanUp() {
    if (toBeFlushed != null)
        try {
            toBeFlushed.flush();
        } catch (IOException e) {
            // ignore
        }
    if (toBeClosed != null)
        try {
            toBeClosed.close();
        } catch (IOException e) {
            // ignore
        }
    toBeFlushed = null;
    toBeClosed = null;
}
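Because cleanUp() swallows IOException on both flush() and close(), a late write failure (for example, during a transient NFS outage) can be silently lost even though the data never reached the file server. Opening the OutputStream yourself and passing it to the marshaller, instead of handing over a File, makes such failures observable. A sketch (the Sink interface stands in for marshaller.marshal(obj, new StreamResult(out)); names are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitClose {
    // Stand-in for the marshalling call; in the real code this would be
    // marshaller.marshal(obj, new StreamResult(out)).
    interface Sink {
        void writeTo(OutputStream out) throws IOException;
    }

    // Opening and closing the stream ourselves means any IOException on
    // flush()/close() propagates to the caller instead of being ignored
    // by JAXB's cleanUp(), which only sees the File-based stream it owns.
    public static void write(Path file, Sink sink) throws IOException {
        try (OutputStream out = Files.newOutputStream(file)) {
            sink.writeTo(out);
            out.flush(); // a failure here is reported, not swallowed
        } // a close() failure also propagates from try-with-resources
    }
}
```

With this pattern, the ServiceException wrapper in the marshall method above would actually fire on the kind of flush/close failure that is currently invisible.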
Can anyone suggest what the potential issue could be? For example:
- Since we use multiple threads, could a concurrency issue cause the corrupted file generation?
- Since we use NFS, could unavailability of NFS during generation cause the corrupted file generation?
- Since we generate many large XML files, memory usage during generation reaches up to 80%. Could this cause the corrupted XML file generation?
Regards, Mayuran