OutOfMemoryException while transforming large XML to PDF

OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
I get an OutOfMemoryException (Java heap space) while transforming a relatively large XML file (10 MB) with an XSL-FO stylesheet to a PDF file. I'm using these steps for the transformation:

---
FOUserAgent userAgent = fopFactory.newFOUserAgent();
Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, userAgent, out);
TransformerFactory factory = TransformerFactory.newInstance();
Transformer transformer = factory.newTransformer(new StreamSource(xslFile));
transformer.setParameter("versionParam", "2.0");
Source src = new StreamSource(xmlFile);
// FOP's SAX handler receives the XSLT output directly
Result res = new SAXResult(fop.getDefaultHandler());
transformer.transform(src, res);
---

I have tried increasing the initial and maximum heap size (with the -Xms and -Xmx options) at JVM startup, but without success. While transforming, I'm monitoring the used and maximum size of the tenured generation memory pool. The options don't seem to affect the tenured pool: it fills up within three or four minutes, and shortly after that the exception is thrown.

What are my options to prevent the OutOfMemoryException?

Best regards,
Dennis van Zoerlandt

AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Dennis,

First, make sure the process dies during PDF creation, not during the XSLT transformation.
Keep your page-sequences as short as possible: starting a new page-sequence releases the memory used by the previous one.
If you have many images which are only used once, deactivate the image cache.
This topic comes up fairly often on this list, so check the archive for details.
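The page-sequence advice above can be sketched in XSL-FO. This is a minimal, illustrative layout; the master name and content are made up:

```xml
<!-- One long page-sequence keeps its whole layout in memory until it ends.
     Splitting the content into several short sequences (e.g. one per
     chapter) lets FOP release each sequence's areas as it finishes. -->
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="A4"
        page-height="29.7cm" page-width="21cm">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <!-- chapter 1 -->
  <fo:page-sequence master-reference="A4">
    <fo:flow flow-name="xsl-region-body">
      <fo:block>Chapter 1 ...</fo:block>
    </fo:flow>
  </fo:page-sequence>
  <!-- chapter 2: a fresh sequence, so chapter 1's areas can be freed -->
  <fo:page-sequence master-reference="A4">
    <fo:flow flow-name="xsl-region-body">
      <fo:block>Chapter 2 ...</fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>
```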

Regards,

Georg Datterl

------ Contact ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Managing Director: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Other members of the Willmy MediaGroup:

IRS Integrated Realization Services GmbH:    www.irs-nbg.de
Willmy PrintMedia GmbH:                      www.willmy.de
Willmy Consult & Content GmbH:               www.willmycc.de



Re: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi Georg,

As far as I understand it, the transformation is the PDF creation?

The image cache is already cleared after each converted file.

I'll search the list archive for other options. I was hoping for certain JVM settings which could increase the tenured memory pool size.

Best regards,
Dennis van Zoerlandt


AW: AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Dennis,

There are two steps. XML + XSLT -> FO is the transformation; it's usually done by Saxon or Xerces and isn't the main concern on this list. FO -> PDF is the creation; that's FOP's part and the main concern here.
If your process dies during the first step, there's obviously no use in giving you hints on reducing memory consumption in the second step. If you're not sure, you could post the last few lines of debug output before the exception appears; maybe that gives a hint.
As for the JVM settings: I create my larger PDFs with -Xmx2000m -Xincgc
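The two steps can be separated in code to see which one runs out of memory. A sketch under assumptions: the file paths are hypothetical, and the FOP 1.0-era API (`FopFactory.newInstance()`, `org.apache.fop.apps.MimeConstants`) is assumed; an identity `Transformer` feeds the saved FO file into FOP:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.FOUserAgent;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class TwoStepDebug {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names, for illustration only
        File xmlFile = new File("input.xml");
        File xslFile = new File("style.xsl");
        File foFile  = new File("intermediate.fo");
        File pdfFile = new File("output.pdf");

        // Step 1: XSLT only (XML + XSLT -> FO), no FOP involved.
        // If the heap blows here, the transformation is the problem.
        TransformerFactory tf = TransformerFactory.newInstance();
        Transformer xslt = tf.newTransformer(new StreamSource(xslFile));
        xslt.transform(new StreamSource(xmlFile), new StreamResult(foFile));

        // Step 2: FO -> PDF. An identity Transformer pushes the saved
        // FO file into FOP's SAX handler.
        FopFactory fopFactory = FopFactory.newInstance();
        OutputStream out =
            new BufferedOutputStream(new FileOutputStream(pdfFile));
        try {
            FOUserAgent userAgent = fopFactory.newFOUserAgent();
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF,
                                        userAgent, out);
            tf.newTransformer().transform(new StreamSource(foFile),
                new SAXResult(fop.getDefaultHandler()));
        } finally {
            out.close();
        }
    }
}
```

If step 1 succeeds and step 2 fails, the FO file itself can also be posted or inspected, which is what the later replies in this thread suggest.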

Regards,

Georg Datterl


Re: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi Georg,

Now I understand. Is it correct to say that the transformer.transform() method performs both the transformation and the PDF creation? If not, which part of my code performs the creation of the PDF?

Here is the debug logging just before the exception is thrown:

---
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager: org.apache.fop.layoutmgr.BlockLayoutManager@35e6e3[fobj=org.apache.fop.fo.flow.Block@be76c7[@id=]]: Border Rel
Side:after -> MinOptMax[min = 566, opt = 566, max = 566]
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager: org.apache.fop.layoutmgr.BlockLayoutManager@c9630a[fobj=org.apache.fop.fo.flow.Block@115126e[@id=]]: Space Rel
Side:before, null-> MinOptMax[min = 14173, opt = 14173, max = 14173]
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker: PLM> part: 1, start at pos 0, break at pos 3, break class = ANY
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker:      addAreas from 0 to 0
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker: signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractPageSequenceLayoutManager: page finished: 26, current num: 26
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AreaTreeHandler: Last page-sequence produced 2 pages.
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) PageSequenceLayoutManager: Ending layout
---

And the first part of the stack trace:
---
[2011-03-25 14:02:56] java.lang.OutOfMemoryError: Java heap space
        at org.apache.fop.fo.FOText.charIterator(FOText.java:223)
        at org.apache.fop.fo.RecursiveCharIterator.getNextCharIter(RecursiveCharIterator.java:104)
        at org.apache.fop.fo.RecursiveCharIterator.<init>(RecursiveCharIterator.java:62)
        at org.apache.fop.fo.XMLWhiteSpaceHandler.handleWhiteSpace(XMLWhiteSpaceHandler.java:157)
        at org.apache.fop.fo.FObjMixed.handleWhiteSpaceFor(FObjMixed.java:87)
        at org.apache.fop.fo.FObjMixed.finalizeNode(FObjMixed.java:176)
        at org.apache.fop.fo.FONode.endOfNode(FONode.java:326)
        at org.apache.fop.fo.FObjMixed.endOfNode(FObjMixed.java:69)
        at org.apache.fop.fo.flow.Block.endOfNode(Block.java:148)
        at org.apache.fop.fo.FOTreeBuilder$MainFOHandler.endElement(FOTreeBuilder.java:349)
        at org.apache.fop.fo.FOTreeBuilder.endElement(FOTreeBuilder.java:177)
        at com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown Source)
        at com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown Source)
---

It's notable that between the last debug message (13:58:24) and the exception message (14:02:56) there are more than three minutes with no logging. The only thing I see is that the tenured generation memory pool is filling up; within a minute of the last debug message it is completely full. Still, it takes until 14:02:56 for the OutOfMemory exception to be thrown.

I tried your VM settings, but unfortunately without success.

Best regards,
Dennis van Zoerlandt


AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Dennis,

OK, we are definitely in the creation phase, and it seems it's not an obvious problem. Could you run your XML and XSLT through a transformer (XMLSpy, for example) so we have an FO file to work on? Maybe you could even feed that file to FOP then? That should result in the same exception and give us something to work on until the real experts can help you.

Kind regards,

Georg Datterl




Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi Georg,

I'm currently running the XML and XSLT through XMLspy with FOP 0.95 (it seems Altova doesn't support FOP 1.0).

I'm not really sure how to feed the FO file to FOP. Can I just use the FO file as the source for transformer.transform()?

I'll get back to you when I have a FO file. My pc's currently performing very badly, so it seems it's a heavy job.

Best regards,
Dennis van Zoerlandt

Georg Datterl wrote
Hi Dennis,

OK, we are definitely in the creation phase. And it seems it's not an obvious problem. Could you run your xml and your xslt through a transformer (XmlSpy, for example) so we have something to work on? Maybe you could even feed the resulting file to fop then? That should result in the same Exception and give us a starting point, until the real experts can help you.

Kind regards

Georg Datterl

------ Kontakt ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Geschäftsführer: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Weitere Mitglieder der Willmy MediaGroup:

IRS Integrated Realization Services GmbH:    www.irs-nbg.de
Willmy PrintMedia GmbH:                      www.willmy.de
Willmy Consult & Content GmbH:               www.willmycc.de


-----Original Message-----
From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
Sent: Friday, 25 March 2011 14:09
To: fop-users@xmlgraphics.apache.org
Subject: Re: AW: AW: OutOfMemoryException while transforming large XML to PDF


Hi Georg,

Now I understand. Is it correct to say that during the
transformer.transform() method both the transformation and the PDF
creation are performed? If not, which part of my code performs the
creation of the PDF?

Hereby the debug logging before the Exception is thrown:

---
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager:
org.apache.fop.layoutmgr.BlockLayoutManager@35e6e3[fobj=org.apache.fop.fo.flow.Block@be76c7[@id=]]:
Border Rel
Side:after -> MinOptMax[min = 566, opt = 566, max = 566]
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager:
org.apache.fop.layoutmgr.BlockLayoutManager@c9630a[fobj=org.apache.fop.fo.flow.Block@115126e[@id=]]:
Space Rel
Side:before, null-> MinOptMax[min = 14173, opt = 14173, max = 14173]
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker: PLM>
part: 1, start at pos 0, break at pos 3, break class = ANY
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker:
addAreas from 0 to 0
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
signalIDProcessed()
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG)
AbstractPageSequenceLayoutManager: page finished: 26, current num: 26
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AreaTreeHandler: Last
page-sequence produced 2 pages.
[2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) PageSequenceLayoutManager:
Ending layout
---

And the first part of the stack trace:
---
[2011-03-25 14:02:56] java.lang.OutOfMemoryError: Java heap space
        at org.apache.fop.fo.FOText.charIterator(FOText.java:223)
        at
org.apache.fop.fo.RecursiveCharIterator.getNextCharIter(RecursiveCharIterator.java:104)
        at
org.apache.fop.fo.RecursiveCharIterator.<init>(RecursiveCharIterator.java:62)
        at
org.apache.fop.fo.XMLWhiteSpaceHandler.handleWhiteSpace(XMLWhiteSpaceHandler.java:157)
        at
org.apache.fop.fo.FObjMixed.handleWhiteSpaceFor(FObjMixed.java:87)
        at org.apache.fop.fo.FObjMixed.finalizeNode(FObjMixed.java:176)
        at org.apache.fop.fo.FONode.endOfNode(FONode.java:326)
        at org.apache.fop.fo.FObjMixed.endOfNode(FObjMixed.java:69)
        at org.apache.fop.fo.flow.Block.endOfNode(Block.java:148)
        at
org.apache.fop.fo.FOTreeBuilder$MainFOHandler.endElement(FOTreeBuilder.java:349)
        at
org.apache.fop.fo.FOTreeBuilder.endElement(FOTreeBuilder.java:177)
        at
com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown
Source)
        at
com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown
Source)
---

It's notable that between the last debug message (13:58:24) and the
exception message (14:02:56) there are more than 3 minutes with no logging.
The only thing I see is that the tenured generation memory pool is being
filled. Within a minute of the last debug message the tenured pool is
completely full. Still, it takes until 14:02:56 for the OutOfMemory
exception to be thrown.

I tried your VM settings, but unfortunately without success.

Best regards,
Dennis van Zoerlandt


Georg Datterl wrote:
>
> Hi Dennis,
>
> There are two steps: XML+XSLT -> FO, that's the transformation. Done by
> Saxon or Xerces, usually. Anyway, not the main concern on this list.
> FO -> PDF, that's the creation. That's FOP's part and the main concern here.
> If your process dies during the first step, there's no use in giving you
> hints on how to reduce memory consumption in the second step, obviously. If
> you are not sure, you could post the last few lines of debug output before
> the Exception appears. Maybe that can give a hint.
> As for the JVM settings: I create my larger PDFs with -Xmx2000m -Xincgc
>
> Regards,
>
> Georg Datterl
>
>
>
> -----Original Message-----
> From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
> Sent: Friday, 25 March 2011 10:28
> To: fop-users@xmlgraphics.apache.org
> Subject: Re: AW: OutOfMemoryException while transforming large XML to PDF
>
>
> Hi Georg,
>
> As far as I understand it, the transformation is the PDF creation?
>
> The image cache is already cleared after each converted file.
>
> I'll search the list archive for other options. I was hoping for certain
> JVM
> settings which could increase the tenured memory pool size.
>
> Best regards,
> Dennis van Zoerlandt

RE: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Eric Douglas
How to feed the FO file?
Using the command line transform you just pass it to the -fo parameter.

Using embedded code I feed it in as a SAXSource.
Reading in from a file you use:
java.io.File
java.io.FileInputStream
org.xml.sax.InputSource
javax.xml.transform.sax.SAXSource

These create the input. Pass that in to:
javax.xml.transform.TransformerFactory
javax.xml.transform.Transformer
I'm not sure it's required, but I copied this line from the FOP website:
transformer.setParameter("versionParam", "2.0")

Then run your transform, with the FOP output generated by these classes:
org.apache.fop.apps.FopFactory
org.apache.fop.apps.Fop
javax.xml.transform.sax.SAXResult

The SAXResult is created from Fop.getDefaultHandler(). The Transformer can
be created with an XSL file parameter to pass XML into the transform, or
with no XSL to pass the FO in.
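That last point works because JAXP's TransformerFactory.newTransformer(), called without a stylesheet, returns an identity transformer, so the FO events pass through unchanged to whatever Result you give it. A minimal stdlib-only sketch of that behaviour; the FOP side (a SAXResult over fop.getDefaultHandler()) is left out, and the copy() helper is just for illustration:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class IdentityDemo {
    // Identity copy: what transformer.transform() does when the factory was
    // given no stylesheet. With FOP you would read the .fo file via a
    // StreamSource and replace the StreamResult with
    // new SAXResult(fop.getDefaultHandler()).
    static String copy(String xml) throws Exception {
        Transformer identity = TransformerFactory.newInstance().newTransformer();
        StringWriter out = new StringWriter();
        identity.transform(new StreamSource(new StringReader(xml)),
                           new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(copy(
            "<fo:block xmlns:fo=\"http://www.w3.org/1999/XSL/Format\">Hello</fo:block>"));
    }
}
```

So feeding an FO file embedded really is just the same transform() call with no XSL given to the factory.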




Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Kindaian
The major problem I've seen in Java is the heap memory... it gets
exhausted on very big jobs.

The only alternatives are:

1. cut the job into smaller chunks...
2. move the whole environment to 64 bits...

The 64-bit platform will let you allocate more RAM to the process and get
past the roughly 1.2 GB per-process limit of the 32-bit platform.

Cheers,
LF
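Whether the JVM actually picked up a bigger heap, and which data model it runs, can be checked from inside the process. A small stdlib-only sketch; note that sun.arch.data.model is a Sun/Oracle-specific property that other JVMs may not define:

```java
public class HeapInfo {
    // Reports the effective -Xmx ceiling and the JVM data model.
    static String report() {
        // maxMemory() reflects the heap ceiling this JVM will actually use
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // "32" or "64" on Sun/Oracle JVMs; may be absent elsewhere
        String bits = System.getProperty("sun.arch.data.model", "unknown");
        return "max heap: " + maxMb + " MB, data model: " + bits + "-bit";
    }

    public static void main(String[] args) {
        System.out.println(report());
    }
}
```

If the reported max heap doesn't move when you change -Xmx, the flag isn't reaching the JVM that runs the transform.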




Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

rsargent
I don't see a mention of the Java version in play, but if it's Java 1.6 I
would urge the OP to try his hand at using jconsole to examine exactly
what is holding the lion's share of the memory, or to see if too many of
something are hanging around unnecessarily.

rjs
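For what it's worth, the numbers jconsole shows on its Memory tab come from the java.lang.management MXBeans, so they can also be dumped in-process around the transform. A small stdlib-only sketch (pool names such as "Tenured Gen" vary by JVM and collector):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolDump {
    // One MXBean per pool (eden, survivor, tenured/old gen, perm gen, ...);
    // jconsole's Memory tab reads the same beans over JMX.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getUsage() == null) {
                continue; // pool not currently valid
            }
            long used = pool.getUsage().getUsed();
            long max = pool.getUsage().getMax(); // -1 when the pool has no limit
            sb.append(pool.getName())
              .append(": used=").append(used)
              .append(" max=").append(max).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

Printing this before and after each page-sequence would show which pool is growing without being reclaimed.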




Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Kindaian
I also remember, from threads long ago, that references such as indexes
keep the page sections (term?) from being released, as there are "live"
connections to the objects until the very end.

I think there was some discussion about how to sort this out.

Cheers, and keep us posted on your progress.

:)

p.s.- btw... the very first bug on the bug list is regarding this hehehe




AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Dennis,

F10. Or the button just to the left of the "FO" button you are using now. It's labeled "XSL" and performs only the first step (if the button layout hasn't changed in newer versions...).

If you want to use FOP 1.0, have a look at Tools -> Options -> XSL. There you can enter the path to an external XSL-FO transformation engine. Get FOP 1.0 running on your system through its batch file, then enter the path to the batch file there.
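Once that batch file runs standalone, it also answers the earlier question about feeding an FO file to FOP directly: the command line takes -fo instead of -xml/-xsl. Illustrative invocations only; the file names are placeholders:

```shell
# render an already-transformed FO file directly (skips the XSLT step)
fop -fo report.fo -pdf report.pdf

# or let FOP drive the XSLT step itself: XML + XSL -> PDF in one go
fop -xml data.xml -xsl style.xsl -pdf out.pdf
```

On Windows the launcher is fop.bat; adding an -Xmx option to the java invocation inside the batch file raises the heap for these runs.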

Regards,

Georg Datterl


>> For additional commands, e-mail: [hidden email]
>>
>>
>>
>
> --
> View this message in context:
> http://old.nabble.com/OutOfMemoryException-while-transforming-large-XML-to-PDF-tp31236044p31237755.html
> Sent from the FOP - Users mailing list archive at Nabble.com.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [hidden email]
> For additional commands, e-mail: [hidden email]
>
>
>

--
View this message in context: http://old.nabble.com/OutOfMemoryException-while-transforming-large-XML-to-PDF-tp31236044p31238428.html
Sent from the FOP - Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi,

In the meantime I have tested a few things. In the attachment you'll find an FO file (fop1.0-5000-fo.zip) whose data has been scrambled for confidentiality.

I created the FO file with XMLspy and tried to create a PDF file with Apache FOP 1.0 (fop.bat) on my Windows XP workstation. It produced what appears to be the error below; no PDF file was created.

---
31-mrt-2011 14:24:02 org.apache.fop.events.LoggingEventListener processEvent
SEVERE: Image not found. URI: file:/image.bmp. (See position 5:918)
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at org.apache.fop.fo.StaticPropertyList.<init>(StaticPropertyList.java:37)
        at org.apache.fop.fo.FOTreeBuilder$1.make(FOTreeBuilder.java:110)
        at org.apache.fop.fo.FObj.createPropertyList(FObj.java:133)
        at org.apache.fop.fo.FOTreeBuilder$MainFOHandler.startElement(FOTreeBuilder.java:280)
        at org.apache.fop.fo.FOTreeBuilder.startElement(FOTreeBuilder.java:171)
        at org.apache.xalan.transformer.TransformerIdentityImpl.startElement(TransformerIdentityImpl.java:1072)
        at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
        at org.apache.xerces.xinclude.XIncludeHandler.startElement(Unknown Source)
        at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
        at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
        at org.apache.xalan.transformer.TransformerIdentityImpl.transform(TransformerIdentityImpl.java:484)
        at org.apache.fop.cli.InputHandler.transformTo(InputHandler.java:299)
        at org.apache.fop.cli.InputHandler.renderTo(InputHandler.java:130)
        at org.apache.fop.cli.Main.startFOP(Main.java:174)
        at org.apache.fop.cli.Main.main(Main.java:205)
---

Best regards,
Dennis van Zoerlandt

Georg Datterl wrote
Hi Dennis,

F10, or the button just to the left of the "FO" button you are using now. It's labeled "XSL" and performs only the first step (assuming the button layout hasn't changed in newer versions).

If you want to use FOP 1.0, have a look at Tools->Options->XSL. There you can enter the path to an XSL-FO transformation engine. Get FOP 1.0 running on your system through the batch file, then enter the path to the batch file here.

Regards,

Georg Datterl

------ Kontakt ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Geschäftsführer: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Weitere Mitglieder der Willmy MediaGroup:

IRS Integrated Realization Services GmbH:    www.irs-nbg.de
Willmy PrintMedia GmbH:                      www.willmy.de
Willmy Consult & Content GmbH:               www.willmycc.de

-----Original Message-----
From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
Sent: Friday, 25 March 2011 15:30
To: fop-users@xmlgraphics.apache.org
Subject: Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF


Hi Georg,

I'm currently running the XML and XSLT through XMLspy with FOP 0.95 (it
seems Altova doesn't support FOP 1.0).

I'm not really sure how to feed the FO file to FOP. Can I just use the FO
file as the source for transformer.transform()?

I'll get back to you when I have an FO file. My PC is currently performing
very badly, so it seems to be a heavy job.

Best regards,
Dennis van Zoerlandt


Georg Datterl wrote:
>
> Hi Dennis,
>
> OK, we are definitely in the creation phase. And it seems like it's not an
> obvious problem. Could you run your xml and your xslt through a
> transformer (XmlSpy, for example) so we have something to work on? Maybe
> you could even feed the file to fop then? That should result in the same
> Exception and give us something to work on, until the real experts can
> help you.
>
> Kind regards,
>
> Georg Datterl
>
>
> -----Original Message-----
> From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
> Sent: Friday, 25 March 2011 14:09
> To: fop-users@xmlgraphics.apache.org
> Subject: Re: AW: AW: OutOfMemoryException while transforming large XML to
> PDF
>
>
> Hi Georg,
>
> Now I understand. Is it correct to say that during the
> transformer.transform() method both the transformation and the PDF
> creation are performed? If not, which part of my code performs the
> creation of the PDF?
>
> Here is the debug logging before the Exception is thrown:
>
> ---
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager:
> org.apache.fop.layoutmgr.BlockLayoutManager@35e6e3[fobj=org.apache.fop.fo.flow.Block@be76c7[@id=]]:
> Border Rel
> Side:after -> MinOptMax[min = 566, opt = 566, max = 566]
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) BlockLayoutManager:
> org.apache.fop.layoutmgr.BlockLayoutManager@c9630a[fobj=org.apache.fop.fo.flow.Block@115126e[@id=]]:
> Space Rel
> Side:before, null-> MinOptMax[min = 14173, opt = 14173, max = 14173]
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker: PLM>
> part: 1, start at pos 0, break at pos 3, break class = ANY
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AbstractBreaker:
> addAreas from 0 to 0
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) IDTracker:
> signalIDProcessed()
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG)
> AbstractPageSequenceLayoutManager: page finished: 26, current num: 26
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG) AreaTreeHandler: Last
> page-sequence produced 2 pages.
> [2011-03-25 13:58:24 thread-fileprinter1] (DEBUG)
> PageSequenceLayoutManager:
> Ending layout
> ---
>
> And the first part of the stack trace:
> ---
> [2011-03-25 14:02:56] java.lang.OutOfMemoryError: Java heap space
>         at org.apache.fop.fo.FOText.charIterator(FOText.java:223)
>         at
> org.apache.fop.fo.RecursiveCharIterator.getNextCharIter(RecursiveCharIterator.java:104)
>         at
> org.apache.fop.fo.RecursiveCharIterator.<init>(RecursiveCharIterator.java:62)
>         at
> org.apache.fop.fo.XMLWhiteSpaceHandler.handleWhiteSpace(XMLWhiteSpaceHandler.java:157)
>         at
> org.apache.fop.fo.FObjMixed.handleWhiteSpaceFor(FObjMixed.java:87)
>         at org.apache.fop.fo.FObjMixed.finalizeNode(FObjMixed.java:176)
>         at org.apache.fop.fo.FONode.endOfNode(FONode.java:326)
>         at org.apache.fop.fo.FObjMixed.endOfNode(FObjMixed.java:69)
>         at org.apache.fop.fo.flow.Block.endOfNode(Block.java:148)
>         at
> org.apache.fop.fo.FOTreeBuilder$MainFOHandler.endElement(FOTreeBuilder.java:349)
>         at
> org.apache.fop.fo.FOTreeBuilder.endElement(FOTreeBuilder.java:177)
>         at
> com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown
> Source)
>         at
> com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.endElement(Unknown
> Source)
> ---
>
> It's notable that between the last debug message (13:58:24) and the
> exception message (14:02:56) there are more than 3 minutes with no logging.
> The only thing I see is that the tenured generation memory pool is being
> filled. Within a minute of the last debug message the tenured memory pool
> is completely full. Still, it takes until 14:02:56 to throw an OutOfMemory
> exception.
>
> I tried your VM settings, but unfortunately without success.
>
> Best regards,
> Dennis van Zoerlandt
>
>
> Georg Datterl wrote:
>>
>> Hi Dennis,
>>
>> There are two steps: XML+XSLT -> FO, that's the transformation, done by
>> Saxon or Xalan, usually. Anyway, not the main concern on this list.
>> FO -> PDF, that's the creation. That's FOP's part and the main concern here.
>> If your process dies during the first step, there's obviously no use in
>> giving you hints on how to reduce memory consumption in the second step. If
>> you are not sure, you could post the last few lines of debug output before
>> the Exception appears. Maybe that can give a hint.
>> As for the JVM settings: I create my larger PDFs with -Xmx2000m -Xincgc
>>
>> Regards,
>>
>> Georg Datterl
>>
>>
>>
>> -----Original Message-----
>> From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
>> Sent: Friday, 25 March 2011 10:28
>> To: fop-users@xmlgraphics.apache.org
>> Subject: Re: AW: OutOfMemoryException while transforming large XML to PDF
>>
>>
>> Hi Georg,
>>
>> As far as I'm understanding it, the transformation is the PDF creation?
>>
>> The image cache is already cleared after each converted file.
>>
>> I'll search the list archive for other options. I was hoping for certain
>> JVM
>> settings which could increase the tenured memory pool size.
>>
>> Best regards,
>> Dennis van Zoerlandt
>>

--
View this message in context: http://old.nabble.com/OutOfMemoryException-while-transforming-large-XML-to-PDF-tp31236044p31238428.html
Sent from the FOP - Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
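Georg's JVM settings from the quoted exchange above (-Xmx2000m -Xincgc) can also be applied to FOP's command-line scripts. A small sketch, under the assumption that your fop shell/batch script honours the FOP_OPTS environment variable (as described on FOP's "Running FOP" page); the file names are hypothetical:

```shell
# Give the FOP command-line script a bigger heap plus incremental GC,
# matching the -Xmx/-Xincgc settings discussed above.
export FOP_OPTS="-Xmx1024m -Xincgc"
echo "FOP will start the JVM with: $FOP_OPTS"
# fop -fo fop1.0-5000.fo -pdf out.pdf   # run where FOP 1.0 is installed
```

This only raises the ceiling; if a single page-sequence keeps growing, a larger heap merely delays the OutOfMemoryError.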

Re: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
In reply to this post by Kindaian
Hi,

Splitting the XML input file into several chunks is not a preferable option for me, though it is a valid one. Moving complete system environments to x64 is also an option, but unrealistic in this specific case, as it would mean breaking compatibility with x86 systems.

I'm indeed using Java 1.6; I'll consider using jconsole to monitor memory usage.

Best regards,
Dennis van Zoerlandt

Kindaian wrote
I also remember, from threads long ago, that the use of references
like indexes and the like keeps the page sections (term?) from being
released, as there are "live" connections to the objects until the very end.

Think that there was some discussion regarding on how to sort this out.

Cheers, and keep us posted on your progress.

:)

p.s.- btw... the very first bug on the bug list is regarding this hehehe


On 25/03/2011 21:04, Rob Sargent wrote:
> I don't see a mention of the Java version in play, but if it's Java 1.6 I
> would urge the OP to try his hand at using jconsole to examine exactly
> what is holding the lion's share of the memory, or to see if too many
> of something are hanging around unnecessarily.
>
> rjs
>
>
> On 03/25/2011 02:54 PM, Luis Ferro wrote:
>> The major problem I've seen in Java is the heap memory... It gets
>> exhausted on very big jobs.
>>
>> The only alternatives are:
>>
>> 1. cut the job in smaller chunks...
>> 2. move all the environment to 64bits...
>>
>> The 64-bit platform will allow you to allocate more RAM to the process
>> and surpass the roughly 1.2 GB per-process limit of the 32-bit
>> platform.
>>
>> Cheers,
>> LF
>>
>>
>> On 25/03/2011 16:13, Eric Douglas wrote:
>>> How to feed the FO file?
>>> Using the command line transform you just pass it to the -fo parameter.
>>>
>>> Using embedded code I feed it as a SAXSource.
>>> Reading in from file you have:
>>> java.io.File
>>> java.io.FileInputStream
>>> org.xml.sax.InputSource
>>> javax.xml.transform.sax.SAXSource
>>>
>>> These classes create the input.
>>> Pass that in to:
>>> javax.xml.transform.TransformerFactory
>>> javax.xml.transform.Transformer
>>> I'm not sure it's required but I copied this code from the FOP website:
>>> Transformer.setParameter("versionParam", "2.0")
>>>
>>> Then your transform, with the FOP output generated with these classes.
>>> org.apache.fop.apps.FopFactory
>>> org.apache.fop.apps.Fop
>>> javax.xml.transform.sax.SAXResult
>>>
>>> The SAXResult is created from the Fop.getDefaultHandler.
>>> The Transformer can be created with an XSL file parameter to pass XML
>>> into the transform, or with no XSL to pass the FO in.
>>>
>>>
>>> -----Original Message-----
>>> From: Dennis van Zoerlandt [mailto:dvzoerlandt@vanboxtel.nl]
>>> Sent: Friday, March 25, 2011 10:30 AM
>>> To: fop-users@xmlgraphics.apache.org
>>> Subject: Re: AW: AW: AW: OutOfMemoryException while transforming large
>>> XML to PDF
>>>
>>>
>>> Hi Georg,
>>>
>>> I'm currently running the XML and XSLT through XMLspy with FOP 0.95 (it
>>> seems Altova doesn't support FOP 1.0).
>>>
>>> I'm not really sure how to feed the FO file to FOP? Can I just put the
>>> FO file as source file for the transformer.transform()?
>>>
>>> I'll get back to you when I have a FO file. My pc's currently
>>> performing
>>> very badly, so it seems it's a heavy job.
>>>
>>> Best regards,
>>> Dennis van Zoerlandt
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
>>> For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
>>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
>> For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
>>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
> For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
>


---------------------------------------------------------------------
To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org
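Eric's point in the quoted message above, that a Transformer created with no XSL acts as an identity transform and so passes an FO document straight through to FOP, can be sketched with plain JAXP. This is a minimal sketch, not FOP's own example code: to stay runnable without FOP on the classpath, it writes to a StreamResult where the SAXResult from Fop.getDefaultHandler() would normally go, and the inline FO fragment is purely illustrative.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class IdentityFoFeed {

    /** Runs an FO document through a Transformer created with no XSL:
     *  an identity transform, so the FO reaches the Result unchanged. */
    public static String passThrough(String foXml) throws Exception {
        Transformer identity = TransformerFactory.newInstance().newTransformer();
        StringWriter out = new StringWriter();
        // With FOP, the Result would be: new SAXResult(fop.getDefaultHandler())
        identity.transform(new StreamSource(new StringReader(foXml)),
                           new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String fo = "<fo:root xmlns:fo=\"http://www.w3.org/1999/XSL/Format\">"
                  + "<fo:layout-master-set/></fo:root>";
        String result = passThrough(fo);
        // The identity transform preserves the FO markup
        System.out.println(result.contains("fo:layout-master-set")); // prints "true"
    }
}
```

To feed a file instead, replace the StringReader source with `new StreamSource(foFile)`, exactly as in the embedded snippet from the original post, just without the XSL argument.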

Re: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Andreas L. Delmelle
In reply to this post by Dennis van Zoerlandt
On 31 Mar 2011, at 15:08, Dennis van Zoerlandt wrote:

Hi Dennis

> In the meanwhile I have tested a few things. In the attachment you'll find a
> FO file ( http://old.nabble.com/file/p31286241/fop1.0-5000-fo.zip
> fop1.0-5000-fo.zip ) which has scrambled data because of confidentiality.
>
> I created the FO file with XMLspy and tried to create a PDF file with Apache
> FOP 1.0 (fop.bat) on my Windows XP workstation. It produced (what it seems)
> this error (see below). No PDF file was created.

It seems like the classic "cram all content into one page-sequence" issue.
With a file of that size, there is little or nothing you can do. The current architecture of FOP does not allow rendering such documents without a sufficiently large heap.

That said: I wrote the above while I was running your sample file (with FOP Trunk, using Saxon as XSLT/JAXP implementation), and it just completed on my end, with a heap of 1GB. It did take about 7 minutes, but still... I got a nice output file of 455 pages.
I doubt that it is related to images, as there is only one fo:external-graphic.
Do you have font auto-detection enabled, by any chance? That might consume an unnecessary amount of heap space, for example, if you only actually use a handful of custom fonts, but have a large number of those installed on your system.
Another option is that some fixes for memory-leaks, applied to Trunk after the 1.0 release, are actually helping here.

> Splitting the XML input file into several chunks is not a preferable option
> for me, nevertheless it is a valid one.

Note: it is, strictly speaking, not necessary to split up the input so that you have several FOs. It would suffice to modify the stylesheet so that the content is divided over multiple page-sequences. If you can keep the size of the page-sequences down to, say, 30 to 40 pages, that might already reduce the overall memory usage significantly.
There are known cases of people rendering documents of 10,000+ pages. No problem, provided that not all of those pages are generated by the same fo:page-sequence.
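The stylesheet change suggested above can be sketched as follows. This is a minimal, hypothetical example (the report/chapter element names and the A4 master are assumptions, not the actual input schema): instead of wrapping the whole document in one fo:page-sequence, emit one per repeating input element, so FOP can release each sequence's memory once it is laid out.

```xml
<!-- Sketch: one fo:page-sequence per chapter, not one around the whole
     document. Element and master names are placeholders. -->
<xsl:template match="report">
  <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
    <fo:layout-master-set>
      <fo:simple-page-master master-name="A4"
          page-height="29.7cm" page-width="21cm" margin="2cm">
        <fo:region-body/>
      </fo:simple-page-master>
    </fo:layout-master-set>
    <xsl:for-each select="chapter">
      <fo:page-sequence master-reference="A4">
        <fo:flow flow-name="xsl-region-body">
          <!-- chapter content templates go here -->
          <xsl:apply-templates/>
        </fo:flow>
      </fo:page-sequence>
    </xsl:for-each>
  </fo:root>
</xsl:template>
```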


Regards

Andreas
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi Andreas,

Alright, it seems a logical explanation that you need a large heap to produce this kind of large document.

Font auto-detection seems to be off. In the FOP configuration file no auto-detect flag is present, and I also didn't include a manifest file with x-fonts.

I will look further into modifying the XSL file in such a way that multiple page-sequences are used. I think it's the best solution so far. Am I correct to say multiple page-sequences won't affect the definitive page lay-out of the PDF file? How can I split up the content into multiple page-sequences? I think there's also a modification necessary in the XML input file?

Another question: is there a reliable way to 'predict' or calculate the page count the PDF file will have, before any transformation is started? I can check the file size of the XML input file, but that isn't really reliable because the complexity of the XSL stylesheet is also a factor. I'm thinking of aborting the task when the resulting PDF file would have 100+ pages, for instance. Is this possible?

Best regards,
Dennis van Zoerlandt


AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Dennis,

Page-sequences start with a new page. If you start a new page-sequence instead of inserting a fixed page break, the layout does not change, as far as I can tell.
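In FO terms, the two variants look like this (a minimal sketch; the master name and content are made up). The page break falls in the same place either way, but only the second form lets FOP release the finished sequence:

```xml
<!-- One sequence, forced break: everything stays in memory until the end. -->
<fo:page-sequence master-reference="A4">
  <fo:flow flow-name="xsl-region-body">
    <fo:block>Chapter 1 ...</fo:block>
    <fo:block break-before="page">Chapter 2 ...</fo:block>
  </fo:flow>
</fo:page-sequence>

<!-- Two sequences, same visual break: memory for the first can be freed. -->
<fo:page-sequence master-reference="A4">
  <fo:flow flow-name="xsl-region-body"><fo:block>Chapter 1 ...</fo:block></fo:flow>
</fo:page-sequence>
<fo:page-sequence master-reference="A4">
  <fo:flow flow-name="xsl-region-body"><fo:block>Chapter 2 ...</fo:block></fo:flow>
</fo:page-sequence>
```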

Regards,

Georg Datterl

------ Kontakt ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Geschäftsführer: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Weitere Mitglieder der Willmy MediaGroup:

IRS Integrated Realization Services GmbH:    www.irs-nbg.de
Willmy PrintMedia GmbH:                      www.willmy.de
Willmy Consult & Content GmbH:               www.willmycc.de


-----Ursprüngliche Nachricht-----
Von: Dennis van Zoerlandt [mailto:[hidden email]]
Gesendet: Freitag, 1. April 2011 13:13
An: [hidden email]
Betreff: Re: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF


Hi Andreas,

Alright, it seems a logical explanation you need a large heap to produce
this kind of large documents.

Font auto detection seems to be off. In the FOP configuration file no
auto-detect flag is present and I also didn't include a manifest file with
x-fonts.

I will look further into modifying the XSL file in a such way multiple
page-sequences are used. I think it's the best solution this far. Am I
correct to say multiple page-sequences won't affect the definitive page
lay-out of the PDF file? How can I split up the content in multiple
page-sequences? I think there's also a modification necessary in the XML
input file?

Another question: is there a reliable way to 'predict' or calculate the page
count the PDF file will have, before any transformation is started? I can
check the file size of the XML input file, but that isn't really reliable
because the complexity of the XSL stylesheet is also a factor. I'm thinking
of aborting the task when the resulting PDF file will have 100+ pages (for
instance). Is this possible?

Best regards,
Dennis van Zoerlandt


Andreas Delmelle-2 wrote:

>
> On 31 Mar 2011, at 15:08, Dennis van Zoerlandt wrote:
>
> Hi Dennis
>
>> In the meanwhile I have tested a few things. In the attachment you'll
>> find a
>> FO file ( http://old.nabble.com/file/p31286241/fop1.0-5000-fo.zip
>> fop1.0-5000-fo.zip ) which has scrambled data because of confidentiality.
>>
>> I created the FO file with XMLspy and tried to create a PDF file with
>> Apache
>> FOP 1.0 (fop.bat) on my Windows XP workstation. It produced (what it
>> seems)
>> this error (see below). No PDF file was created.
>
> It seems like the classic "cram all content into one page-sequence" issue.
> With a file of that size, there is little or nothing you can do. The
> current architecture of FOP does not allow to render such documents
> without a sufficiently large heap.
>
> That said: I wrote the above while I was running your sample file (with
> FOP Trunk, using Saxon as XSLT/JAXP implementation), and it just completed
> on my end, with a heap of 1GB. It did take about 7 minutes, but still... I
> got a nice output file of 455 pages.
> I doubt that it is related to images, as there is only one
> fo:external-graphic.
> Do you have font auto-detection enabled, by any chance? That might consume
> an unnecessary amount of heap space, for example, if you only actually use
> a handful of custom fonts, but have a large number of those installed on
> your system.
> Another option is that some fixes for memory-leaks, applied to Trunk after
> the 1.0 release, are actually helping here.
>
>> Splitting the XML input file into several chunks is not a preferable
>> option
>> for me, nevertheless it is a valid one.
>
> Note: it is, strictly speaking, not necessary to split up the input so
> that you have several FOs. What would suffice is to modify the stylesheet,
> so that the content is divided over multiple page-sequences. If you can
> keep the size of the page-sequences down to, say, 30 to 40 pages, that
> might already reduce the overall memory usage significantly.
> There are known cases of people rendering documents of 10,000+ pages with
> no problem, provided not all of those pages are generated by the same
> fo:page-sequence.
>
>
> Regards
>
> Andreas
> ---
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [hidden email]
> For additional commands, e-mail: [hidden email]
>
>
>
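
Andreas's suggestion of dividing the content over multiple page-sequences can be sketched in the stylesheet roughly as follows. This is a minimal, hypothetical fragment: the element names `report` and `chapter` and the page master are placeholders for whatever the real source XML and layout actually use.

```xml
<!-- Hypothetical stylesheet fragment: "report" and "chapter" stand in
     for the real grouping elements of the source XML. -->
<xsl:template match="report">
  <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
    <fo:layout-master-set>
      <fo:simple-page-master master-name="A4"
          page-height="29.7cm" page-width="21cm" margin="2cm">
        <fo:region-body/>
      </fo:simple-page-master>
    </fo:layout-master-set>
    <!-- One fo:page-sequence per chapter instead of one for the whole
         document: FOP can release the memory of each finished sequence. -->
    <xsl:for-each select="chapter">
      <fo:page-sequence master-reference="A4">
        <fo:flow flow-name="xsl-region-body">
          <xsl:apply-templates select="."/>
        </fo:flow>
      </fo:page-sequence>
    </xsl:for-each>
  </fo:root>
</xsl:template>
```

The key point is only that the `fo:page-sequence` element moves inside the loop over some natural grouping of the input; the source XML itself does not necessarily need to change.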



Re: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Dennis van Zoerlandt
Hi Georg,

At the moment we don't use fixed page breaks, just a single page-sequence. The stylesheet files are generated with Digiforms Designer.

Best regards,
Dennis van Zoerlandt

Georg Datterl wrote
Hi Dennis,

Page-sequences start with a new page. If you start a new page-sequence instead of inserting a fixed page break, the layout does not change, as far as I can tell.

Regards,

Georg Datterl

------ Kontakt ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Geschäftsführer: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Weitere Mitglieder der Willmy MediaGroup:

IRS Integrated Realization Services GmbH:    www.irs-nbg.de
Willmy PrintMedia GmbH:                      www.willmy.de
Willmy Consult & Content GmbH:               www.willmycc.de



RE: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Eric Douglas
I currently have only one fo:page-sequence tag in my XSL.
How would automatic page numbering with fo:page-number work otherwise?

Is it possible that the memory requirements for extremely large documents could be reduced by adding an option to swap some values out to temp files? Maybe save the information to a file for every 100 pages?
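
Regarding the page-numbering question: with `initial-page-number="auto"` (the default), each new fo:page-sequence continues counting from the previous one, so fo:page-number keeps working across sequence boundaries. A rough sketch, assuming a page master named "A4" that defines a region-after for the footer:

```xml
<!-- Sketch: numbering continues across page-sequences when
     initial-page-number="auto" (the default). Assumes a master "A4"
     with an xsl-region-after defined. -->
<fo:page-sequence xmlns:fo="http://www.w3.org/1999/XSL/Format"
    master-reference="A4" initial-page-number="auto">
  <fo:static-content flow-name="xsl-region-after">
    <fo:block text-align="center">
      Page <fo:page-number/>
    </fo:block>
  </fo:static-content>
  <fo:flow flow-name="xsl-region-body">
    <fo:block>Chapter content ...</fo:block>
  </fo:flow>
</fo:page-sequence>
```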




AW: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

Georg Datterl
Hi Eric,

I don't think page numbering depends on the page-sequence; numbering simply continues across page-sequence boundaries.

Regards,

Georg Datterl

