"
Poiche' un politico non crede mai in quello che dice, quando viene preso alla lettera rimane sempre molto sorpreso.

Charles De Gaulle
"
 
Below are the posts published in this section, in chronological order.
 
 

Log4J exception FAQ: "How do I print the stack trace of an exception using Log4J or Commons Logging?"

Printing the stack trace of a Log4J exception seems to be something of a trick question. In reviewing Java code from different developers at different organizations, I see a lot of people working very hard to print a stack trace using Log4J, including many variations on calling the e.printStackTrace() method.

Log4J exception stack trace - short answer

The short answer is that all you have to do to print the stack trace of an exception using Java and Log4J (or the Apache Commons Logging project) is this:

log.error("Your description here", exception); 

where exception is your Java Exception object. The Log4j error method that takes a description followed by a Throwable will print the stack trace of the Java Throwable object.

For more information on how this works with Log4J I recommend looking at the API documentation for the Logger class.
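To make the short answer concrete, here is a minimal runnable sketch. Since Log4J itself may not be on your classpath, it uses the JDK's own java.util.logging, whose log(Level.SEVERE, message, throwable) overload behaves like Log4J's error(String, Throwable); the class name and the deliberate parse failure are illustrative only.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogErrorDemo {
    static final Logger log = Logger.getLogger(LogErrorDemo.class.getName());

    // Logs an exception the Log4J way (message + Throwable) and returns
    // the Throwable the logging framework captured, stack trace and all.
    static Throwable logAndCapture() {
        final LogRecord[] captured = new LogRecord[1];
        Handler handler = new Handler() {
            @Override public void publish(LogRecord record) { captured[0] = record; }
            @Override public void flush() {}
            @Override public void close() {}
        };
        log.addHandler(handler);
        try {
            Integer.parseInt("not a number");   // force an exception
        } catch (NumberFormatException e) {
            // Log4J / Commons Logging equivalent:
            //   log.error("Could not parse input", e);
            log.log(Level.SEVERE, "Could not parse input", e);
        }
        log.removeHandler(handler);
        return captured[0].getThrown();
    }

    public static void main(String[] args) {
        System.out.println(logAndCapture().getClass().getSimpleName());
    }
}
```

The key point carries over directly: pass the Throwable itself as the final argument and let the framework format the stack trace, rather than calling e.printStackTrace() yourself.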

 
By Muso (31/05/2011 @ 09:56:17, in Informatica, read 2908 times)
The workaround is described at the following link:
http://www.hildeberto.com/2008/05/hibernate-and-jersey-conflict-on.html


The ASM package is needed by the cglib package, which is part of the Hibernate libraries. If we remove that package, Jersey works correctly, but Hibernate stops working. To solve this conflict, use cglib-nodep.jar instead of cglib.jar and keep ASM version 3.x with Jersey. cglib-nodep.jar includes the ASM classes that cglib.jar demands, with the package name changed to avoid any class conflict.

The ASM library is a Java bytecode manipulation and analysis framework. According to its website "(...) it can be used to modify existing classes or dynamically generate classes, directly in binary form. Provides common transformation and analysis algorithms to easily assemble custom complex transformations and code analysis tools". ASM is used by many products such as AspectJ, Oracle TopLink, JRuby, and many others. Frameworks cannot simply ignore it, because it is a matter of flexibility. The best alternative is always to investigate the unexpected problem and push for better error reporting from the JVM.
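Concretely, the jar swap might look like this on the application's classpath; all version numbers below are illustrative, so check the ones your Hibernate and Jersey releases actually ship:

```
# before: two incompatible ASM versions on the classpath
lib/asm-1.5.3.jar         # pulled in for Hibernate's cglib -> conflicts
lib/asm-3.1.jar           # required by Jersey
lib/cglib-2.1_3.jar

# after: drop the old ASM jar and the plain cglib jar
lib/asm-3.1.jar           # Jersey keeps its ASM 3.x
lib/cglib-nodep-2.1_3.jar # ships its own repackaged ASM classes
```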
 
By Muso (31/05/2011 @ 09:50:59, in Informatica, read 2360 times)
I think it's simply what the exception says: you don't have the cglib jar in your classpath. Most probably your classes don't implement any interfaces, and by default Spring AOP tries to proxy such classes with cglib: http://static.springframework.org/sp...fb-proxy-types
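To see why the interface matters, here is a stdlib-only sketch: the JDK's built-in dynamic proxies (java.lang.reflect.Proxy) can only proxy interfaces, which is exactly why Spring AOP has to fall back to cglib bytecode subclassing when a target class implements none. All names here are illustrative.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // A target that exposes an interface can be proxied by the JDK itself.
    interface Greeter { String greet(String name); }
    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Wraps the target in a JDK dynamic proxy that delegates every call.
    static Greeter proxied(Greeter target) {
        InvocationHandler h = (proxy, method, args) -> method.invoke(target, args);
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                h);
    }

    public static void main(String[] args) {
        System.out.println(proxied(new GreeterImpl()).greet("world"));
    }
}
```

Handing Proxy.newProxyInstance a plain class instead of an interface throws IllegalArgumentException; cglib works around that limitation by generating a subclass at runtime, which is why its jar must be on the classpath.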
 
By Muso (24/03/2011 @ 20:00:00, in Informatica, read 7309 times)
taken from: toolbox.com

A collection of links to resources for DataStage for installing, using, getting certified, optimising and administering DataStage Server Edition, Parallel Edition for old and new versions.

I've done a lot of posts in the past about The Top 7 Online DataStage Tutorials and 7 Even Better Online DataStage Tutorials, but in this post I bring it all together and update it with new links.  A few of the links below are to the DSXChange forum; they work better if you have a login and set it to remember you when you visit.

Working with Databases from DataStage

  1. Oracle Performance Tuning - Bulk / Direct / OCI / Updates  - Here is a golden thread from the deep archives of DSXChange from Ross Leishman on Oracle performance from DataStage with some timings and techniques for the common OCI load techniques plus some out of the box thoughts on Oracle External Tables in ETL and Partition Exchange Load.  Wow.  Since it's a forum the thread grows with feedback and extra tips.
  2. Datastage 7x Enterprise Edition with Teradata - Essential reading if you are striking out on your first DataStage Teradata expedition as Joshy explains the differences between the different Teradata load stages.  TPump may sound like something advertised in a spam email promising enlargement but it's just one of the ways to get data into Teradata very quickly.  Multiload and FastLoad may not be multi or fast in certain situations.
  3. Configure DB2 remote connectivity with WebSphere DataStage - IBM DeveloperWorks article, getting DataStage to connect to remote DB2 databases from a parallel job is a bit like spending an hour assembling IKEA furniture without instructions only to discover it's two pieces of furniture, not one, and they are both missing key pieces.  You need this guide.  Don't even try without it.  You'll never get those three weeks of your life back again.
  4. Multiple readers per node loading into ODBC - a deep analysis thread on DSXChange that examines impact of having multiple readers per node when reading a sequential file and writing to an ODBC database.  A good scenario for looking at parallel config files, job debug and the impact of nodes and partitions on performance.

 

DataStage Enterprise Edition Online Resources

  1. Create custom operators for WebSphere DataStage and Custom combinable operators for IBM WebSphere DataStage and DataStage Parallel routines made really easy - I've put these two DeveloperWorks articles together with the Joshy George blog post, they were written many months apart but they cover the ancient and mysterious art of creating custom parallel operators for DataStage.  See if you can spot the difference between custom operators and combinable operators .. then explain it to me.
  2. WebSphere DataStage Parallel Job Tutorial Version 8  - The Official DataStage tutorial from IBM.  This is the one that comes on the Information Server installation and is available from the IBM publications centre and you can download it free of charge as a 1.05M PDF file.  If you cannot do the tutorial you can at least read the tutorial.
  3. Modify Stage with Andy Sorrell  - Andy from the DSXChange steps through using the parallel Modify Stage.  There are no hints inside this stage - no drop down menus and no help screens and you can't chain Andy to your computer so the next best thing is watching his video.
  4. End of the Road to DataStage Certification - Minhajuddin has a blog with some good posts on how to prepare for and pass the exam with his own experiences and how he passed with a better score than me!
  5. DataStage Certification - how to pass the exam - My tips on passing the exam.  What to study and what to expect.  Keep an eye out for a DataStage 8 certification exam in the next couple months.
  6. DataStage tip: using job parameters without losing your mind and 101 uses for ETL job parameters - my early look at DataStage parameters, since updated with version 8 Parameter Sets in more recent blog posts.
  7. Shell script to start a job using dsjob - From Ken Bland on the forum DSXChange comes a full Unix shell script for running a DataStage job from an enterprise scheduling tool.  It was written for DataStage 5 but it should still work for DataStage 8!  It fetches parameter values from a parameter ini table - something you could replace with a Parameter Set in the latest version, it retrieves more dynamic processing parameters from a database table.  It shows how to check the status of a job after it finishes.
  8. Find and remove "orphan" Dataset files - from the DSXChange FAQ forum comes a way to get rid of unwanted datasets.  These things get created by production jobs, and then the jobs get decommissioned or the dataset name changes but the old datasets stay there taking up space and polluting the ozone layer.
  9. Duke Consulting Tips & Tricks - Kim Duke Consulting brings DataStage tips and utilities.  There is ETLStats - a set of DataStage components that lets you load operation DataStage metadata (job run times and stage/link row counts and parameter values) into a simple database schema.  You can get the ETLStats pack and a video on how to install it.  Some DataStage Server routines and Unix scripts and an XML Best Practices sample.
  10. I need a DSX-Cutter - Want to put your DataStage jobs into source control?  You need a DSX-Cutter.  It takes a very large DSX export file and cuts it into individual jobs for check-in.  This DSXChange thread got an amazing 48 replies and has the code for a DSX-Cutter that should be compatible with version 8.  You won't need it for much longer - a future version of DataStage promises a version control API layer.
  11. A flexible data integration architecture using WebSphere DataStage and WebSphere Federation Server and Access application data using WebSphere Federation Server and WebSphere DataStage - Both these tutorials combine Federation Server and DataStage.  The second one also shows the SAP R/3 Pack for DataStage.  Got some scenarios for using ETL and federation together.
  12. How do I suppress a warning message? - the first time you run a parallel job you may have a heart attack from all the warning messages.  Relax - it's just being picky about metadata.  This DSXChange FAQ shows how to remove the warning messages.
  13. 10 Ways to Make DataStage Run Slower - my blog post using reverse psychology to find ways to use DataStage better by making it run slower.
  14. Sorts to the left of me, sorts to the right - when you get into big data and parallel jobs you need to concentrate a lot more on your sorting.  Have you got secret sorts creeping into your job without you knowing?  Can you pre-sort and if so how do you stop your job from re-sorting.

 

DataStage Real Time

It's time to get real.  DataStage 7 had the Real Time Services; DataStage 8 has the Information Services Director.  They both do the same thing - they turn DataStage and QualityStage jobs into an always-on job - a web service, an enterprise java bean, an SOA-enabled job.  It's becoming a lot more important with operational Master Data Management.

  1. Getting Started with IBM WebSphere Information Services Director.  A LeverageInformation technical tip via a flash video on how to turn a DataStage or QualityStage job into a web service.
  2. Transform and integrate data using WebSphere DataStage XML and Web services packs - This Developerworks article shows how to get the XML input and output stages up and running in a server job.  It uses Server Jobs to read XML but it's virtually identical to how XML is used in Parallel Jobs.  A Server Job finds the XML using the Folder stage to pass the data to the XML Input stage.  A parallel job uses the Sequential File stage instead with a file mask option to pass XML through to the XML Input stage.  Different stages but same method.  The DataStage Real Time Pack or the Service Director both turn Parallel and Server jobs into web services the same way.
  3. Handling Nulls in XML Sources - the DSRealTime blog looks at nulls in XML - is it null or is it just missing, and did the world really deserve XML?  Google is trying to supplant XML, but Ernie helps explain it.
  4. How to Invoke Complex Web Services - this section of links would be really lame without the DSRealTime blog.  In depth entry on complex web services  - arrays, security, SOAP headers and embedded XML.

 

New to DataStage 8

This category is for people who are new to version 8 and want to know how to use the new features.

  1. Version 8.0.1 Installation - this DSXChange forum thread takes you through the eleven steps, and dozens of sub tasks, in an Information Server install on Solaris that's good reading for any Unix/Linux install.  More comprehensive than the Install and Upgrade Guide.
  2. The DataStage 8 Server Edition SuperFAQ - are you on DataStage version 7 and pondering version 8?  This FAQ lists the questions you might have with some answers.
  3. IBM Information Server Using Slowly Changing Dimensions in IBM WebSphere DataStage Projects - The DSXChange forum has an interesting thread about whether parallel jobs or server jobs are better at slowly changing dimensions.  Check out this LeverageInformation tutorial for the new DataStage 8 parallel job stage and read the thread for the server job approach.  A lot of dimensions have smaller volumes of data so Server jobs are an option.
  4. IBM Information Server: Setting up IBM WebSphere DataStage Users and IBM Information Server: Using Groups to Simplify User Administration with the Internal Directory  - I've put these two LeverageInformation tutorials together for obvious reasons.  You have several security options with DataStage 8 about where your users and passwords are setup and maintained. 
  5. How to Create, Use and Maintain DataStage 8 Parameter Sets  - This is a three part series I wrote about the new Parameter Sets in DataStage 8 and how they interact with Environment Variables and User-Defined Environment Variables.  You need to know this before you start using version 8 because job parameters are a pre-requisite of a well designed job.
  6. DataStage 8 Tutorial: Using Range Lookups - From the people who brought you Parameter Sets (me) comes Range Lookups.  Another walk through that will have you on the edge of your seat.  Range lookup functionality is now built into the Lookup stage - finally!
  7. Navigating the Many Paths of Metadata in the Information Server and Using IBM WebSphere DataStage to Import Metadata into the IBM WebSphere Metadata Repository - In DataStage 8 you have many ways to import metadata.  DataStage Designer table import, the import/export manager and the Information Analyzer console.  This post tries to make some sense of it all.
  8. Using WebSphere DataStage with IBM DataMirror Change Data Capture - DataMirror and DataStage now play together.  You can set up a DataMirror Transformation Server replication that feeds straight into a DataStage job for easier log based change data capture.  This white paper takes a deeper look at it.
  9. DataStage 8 Tutorial: Surrogate Key State Files - looking under the covers at the new state file functionality in DataStage 8 that lets parallel jobs increment surrogate keys across partitions and remember values between job executions.

 

Back to the Future - DataStage Server Edition

I've put the DataStage Server tutorials in a special category since this version is still alive and kicking and some would argue it remains better than the upstart parallel version.

  1. Video FAQ's with Ken Bland - This is a great guide for navigating around DataStage on Unix or Linux, it was written for Server Edition version 7 and earlier but it's still got some relevant information for DataStage 8 parallel edition.
  2. MQSeries…Ensuring Message Delivery from Queue to Target - a post from the DSRealTime blog about how to make sure a message from a queue makes it through a DataStage job into a database schema without losing any bits.
  3. ETL Tools Datastage tutorial and training - This is a big tutorial website with lots of HTML pages on Server Edition tasks: designing jobs, reading sequential files, performing lookups, slowly changing dimensions.
  4. Server to Parallel Transition Lab with Ray Wurlod - Some would argue the best server job tutorial is that one that lets you leave server jobs behind!  This DSXChange Learning Centre tutorial looks at the difference between the versions and how to move to the newer edition.
  5. Using hash files instead of UV tables for multirow - Roll up your sleeves and get into server edition multi row lookups from hash files.  A master class from Ken Bland on DSXChange.  Very useful when doing slowly changing dimension lookups.  Also see a similar thread Hash Files & Slowly changing dimension.
  6. Upgradation & Migration - A DSXChange forum thread where Kim Duke provides a great guide on upgrading DataStage version 7.x and earlier.  Still useful for the DataStage server upgrade to version 8. It comes with a handy script for backing up crucial DataStage files before the upgrade.

It's got to be Red

IBM RedBooks have their own special category because they are so fricken huge.  If all the DataStage Redbooks were stacked on top of each other they would be wafer thin since they are in digital format but if you printed them out they would be a couple thousand pages high.  Inside each RedBook is a section of DataStage theory followed by a real world scenario.  It's very good reading for DataStage certification.

  1. SOA Solutions Using IBM Information Server - Shows some SOA scenarios using DataStage and the Federation Server.
  2. IBM InfoSphere DataStage Data Flow and Job Design - This one comes with some DataStage design recommendations and a great retail scenario.
  3. Deploying a Grid Solution with IBM InfoSphere Information Server - This one is all about deploying DataStage onto a RedHat grid but there are some lessons in there for any type of grid.
Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.
 
By Muso (24/03/2011 @ 15:18:38, in Informatica, read 3787 times)
If you try to run STS (SpringSource Tool Suite) after a clean installation and you keep getting the same error message:
Could not create the Java virtual machine
or
Java was started but returned exit code=1

I changed the heap size down to 512M from 768M in STS.ini, in the STS installation directory, and things worked just fine!
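For reference, the tail of STS.ini might look like this after the change; the surrounding options vary between STS releases, so only the -Xmx line is the point here:

```
-vmargs
-Dosgi.requiredJavaVersion=1.5
-XX:MaxPermSize=256m
-Xms128m
-Xmx512m
```

Everything after -vmargs is passed straight to the JVM, and on 32-bit systems an -Xmx larger than the JVM can actually allocate is a common cause of the "Could not create the Java virtual machine" error.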
 
By Muso (04/03/2011 @ 14:42:56, in Informatica, read 2153 times)
Many people wonder how to insert HTML code into their posts without messing everything up.
dBlog users are no exception; fortunately the Web, the legendary WWW, comes to our aid.
Here are two services that parse your code and generate suitably escaped text to copy and paste wherever you need it.

Blogcrowds

Simplebits

 
By Muso (14/02/2011 @ 20:00:00, in Informatica, read 2869 times)
taken from: capitanfuturo

Today I'm feeling a bit "mischievous", as dear Marco says, and I'll tell you a nice story fit for the trash; by the end, everything will be clear to the attentive reader. The protagonists of this computing story are a gnome, a koala that dresses up as a different animal every six months, and a few irreverent, but no less amusing, users.
The story begins when the human user mounts a pen drive, works on it for a while, deletes some files, and, when it's time to leave, tries to unmount the device, which brings up the dialog of discord:
What do you expect this dialog to do? Honestly, I expect exactly what it does: it faithfully takes the files in the trash and deletes them. The catch is that it is not clear to the user that the files in question are not those in the user's own trash, but those in a trash, a directory named .Trash-1000 that is created and managed on the mounted device. The directory is hidden on Linux, but not on Windows.

Current workaround
This behaviour seems honest and plausible to me: set up a safe mechanism on the device and then let the user choose. If you don't want to move files to the trash, you can always delete them with SHIFT + DEL, or enable the delete entry in Nautilus's context menu.
If you are interested in the latter solution, open Nautilus (the GNOME file manager) and, from the Edit menu, select Preferences. A window like the one below opens. On the Behavior tab, enable the option "Include a Delete command that bypasses Trash".
At this point, right-clicking any file will show a "Delete..." entry in the context menu.

It can be done better!
Back to us: let's try to make sense of this by searching the various GNOME and Ubuntu issue trackers. The point of contention is that these hidden folders are always generated and accumulate on the device, as happens to me daily at home.
Why not delete them?
According to Sebastien Bacher's reply in bug 362050 'Empty trash' on flash drive leaves unnecessary '.Trash' folder, comment 3, this is not done by default because on some file systems you don't have write permission, and the unlucky user would have to create the trash folder manually every time in order to use it; he then cites the behaviour to take as an example: the case of Windows, which as always does whatever it wants, and nobody can object. No comment.
Did you read my comment before? there is filesystem where you will have no such directory until you create one manually to say that you want one, if the system was to delete if for you every time you unmoun the drive you would have to do this work every time you plug the key, not really a win for users, the microsoft folder is already available under linux and nobody at microsoft will modify their os to delete their special directory that's not much different, that seems extra trouble, work and issue for a small cosmetic change
But let's see how this behaviour is received by the human user. I quote a comment (don't ask me to translate it...) from the related Launchpad bug #12893 Shouldn't put .Trash-$USER on removable devices, where it is explicitly requested that the trash simply not be managed at all on removable disks... practically a non-solution:
Please, remove the creation of the .Trash-$USER folder in removable drives, completly, is an incredible pain in the ass for human users.
Thanks
But if in most cases it would be desirable to delete everything, why can't it be done automatically? If I have to create the folder by hand, then I am surely a user who knows about the problem, and I would simply like the option of choosing what to do, as proposed in the description of bug #138058:
I use nautilus to move pictures from my digital camera to my computer. Several times after deleting items on the camera, I have forgotten to empty the trash, so a large fraction of the memory stick is filled with a .Trash folder that I can't remove with the camera's file managment functions. It would be nice if nautilus would ask before unmounting a removeable device with files in the trash. What's needed is a dialog asking something like "There are 24 files totalling 30 megabytes in the trash on /mnt/digicam. Do you want to delete those files now? (Delete Files) (Keep Files) (Don't Unmount)".
Right, I need one more button, a new feature that asks me whether to wipe everything away if I want to. But will this feature be available in GNOME 2.16, as tomi asked back in 2006, in comment 24 of the same bug?
tomi 2006-10-20 10:05:31 UTC Comment 24
Is this funcion now added in GNOME 2.16 or not
Sven Herzberg [developer] 2006-10-20 16:37:39 UTC Comment 25
i don't think so
tomi 2006-10-20 17:46:05 UTC Comment 26
great! Do they even do anything smart with their releases
And they lived happily ever after!
 
By Muso (27/01/2011 @ 12:24:43, in Informatica, read 2662 times)
If you are trying to run or debug a very simple application that fetches an XML document from an eXist db and you receive this error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/ws/commons/serialize/DOMSerializer

it is because you are following this guide: http://exist.sourceforge.net/deployment.html where the section

5. Embedding eXist in an Application

fails to mention that you must also import this .jar: ws-commons-util-1.0.2.jar
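The fix is simply to put that jar on the classpath when you launch the embedded client; a sketch of the command, where MyExistClient and the paths are illustrative and the jars come from a typical eXist 1.x lib/core directory:

```
java -cp "exist.jar:lib/core/xmldb.jar:lib/core/ws-commons-util-1.0.2.jar:." MyExistClient
```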
 
By Muso (17/01/2011 @ 18:58:09, in Informatica, read 3312 times)
If you have authentication problems when you are behind a proxy, despite configuring Ubuntu's "System Proxy",
I suggest editing the file /etc/apt/apt.conf; remember that you can only edit it as the administrator, i.e. with "sudo vi /etc/apt/apt.conf".
Insert the username and password directly, changing this:

Acquire::http::Proxy "http://proxy:port";
Acquire::ftp::Proxy "ftp://proxy:port";

to this:

Acquire::http::Proxy "http://username:password@proxy:port";
Acquire::ftp::Proxy "ftp://username:password@proxy:port";
 
By Muso (13/01/2011 @ 10:50:31, in Informatica, read 3480 times)
from jugsalerno

For Linux:
In the startup.sh file, replace the following line:
exec "$PRGDIR"/"$EXECUTABLE" start "$@"

with:

JPDA_TRANSPORT="dt_socket"
JPDA_ADDRESS=8000
exec "$PRGDIR"/"$EXECUTABLE" jpda start "$@"

For Windows:
In the startup.bat file, replace the line:
call "%EXECUTABLE%" start %CMD_LINE_ARGS%

with:

SET JPDA_TRANSPORT="dt_socket"
SET JPDA_ADDRESS=8000
call "%EXECUTABLE%" jpda start %CMD_LINE_ARGS%

On both operating systems the first two lines are not strictly necessary, but they should be added if you do not want the default values to be used; the defaults happen to be the same values used in this example.

On both operating systems, to debug with Eclipse you must:

1) Open Eclipse (which seems the bare minimum ;) )
2) Click the RUN menu, then DEBUG
3) Select REMOTE JAVA APPLICATION, right-click it, and select NEW
4) On the first tab (CONNECT), select a project, assign a name, and set the connection properties, i.e. the address and port of the application; the address can also be localhost if you want to test web applications locally.
5) On the second tab (SOURCE), add the sources you want to debug. The project selected on the first tab is included automatically; you will find it by expanding the Default folder in the tab's tree.
6) Click APPLY.
7) Click DEBUG to start.
Now we can use Eclipse's debugging features (inspect, breakpoints, evaluate, etc.) as if our application were running locally.

 


Blog online since 01/06/2007