IRC Archive / Freenode / #oracle / 2010 / February / 27
idletask
Hmm, no one seems to be here atm :p
RJarett
nope
idletask
If I put a tablespace in nologging mode, what does that mean for the tables in that tablespace?
I think I should go this way, and only then enable logging on the tablespace :/
For some reason, an outburst generated 5 GB of redo/arch in no time!
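[What idletask is describing, as a minimal sketch; the tablespace name is hypothetical. Note that ALTER TABLESPACE ... NOLOGGING only changes the default attribute for newly created objects, not existing ones:]
    -- before the load
    ALTER TABLESPACE app_data NOLOGGING;
    -- ... run the import ...
    -- after the load
    ALTER TABLESPACE app_data LOGGING;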
RJarett
idletask: there are many operations that redo must be generated for
if you are in nologging and do not specify append, redo will be generated
even if you are in noarchivelog mode redo can be generated
idletask
Yes, but how can 5 GB redo/arch be generated in less than 5 minutes when importing in tables that have been explicitly changed to be nologging, LOB columns included? :/
RJarett
because you didnt specify direct or append
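[The point RJarett is making: NOLOGGING only suppresses redo for direct-path operations; a conventional insert is fully logged regardless. A sketch with hypothetical table names, assuming the database is not in FORCE LOGGING mode:]
    -- conventional path: full redo even on a NOLOGGING table
    INSERT INTO target_tab SELECT * FROM source_tab;
    -- direct path: data redo is skipped on a NOLOGGING table
    -- (dictionary and space-management changes are still logged)
    INSERT /*+ APPEND */ INTO target_tab SELECT * FROM source_tab;
    COMMIT;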
idletask
No, I didn't, it's a network dp import
RJarett
why would that matter?
plus why do you care if there is redo generated?
idletask
I created the tables first, via metadata_only, then put all of them in nologging mode, lob columns included, then only imported the table data
It all went fine, until that particular point where a mass of redo was generated for no reason that I can see :/
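[A sketch of the three-step procedure idletask describes; schema, table, column and link names are all hypothetical:]
    -- 1) metadata only, over the database link:
    --      impdp system NETWORK_LINK=src_db SCHEMAS=app CONTENT=METADATA_ONLY
    -- 2) flip the created objects to NOLOGGING, LOB columns included:
    ALTER TABLE app.big_tab NOLOGGING;
    ALTER TABLE app.big_tab MODIFY LOB (doc) (NOCACHE NOLOGGING);
    -- 3) then the rows only:
    --      impdp system NETWORK_LINK=src_db SCHEMAS=app CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND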
RJarett
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
idletask
Well, because this is 45 GB that is imported
I didn't want 45 GB of redo generated :p
And that's not the biggest I'll have to do
Rudemeister
hi
RJarett
remove all the indexes from the destination table
idletask
RJarett: they are deactivated at insert time
RJarett
deactivated or removed
idletask
Oh, err, wait
Oh f*
(Action) descs user_indexes
RJarett
wrong user you are looking at?
idletask
No
Aieeee
All indexes have logging turned ON
And I was wondering about redo logs
(Action) bangs head against wall
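[What idletask just found, and the fix, as a sketch (the index name is hypothetical); USER_INDEXES carries a LOGGING column per index:]
    SELECT index_name, logging FROM user_indexes WHERE logging = 'YES';
    ALTER INDEX app_pk_idx NOLOGGING;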
RJarett
even if you specify nologging youll generate redo
idletask
(Action) is an über dork
Yes, for the dictionary and other maintenance stuff, but I don't care
What I don't want is _data_ redo until data import is fully done
That may also explain the abnormal amount of undo generated
Well, I'll wait until that import is over and start again
(Action) really wonders why on earth data pump estimate takes so long over a network link
RJarett
damn it. upgraded a DB to 11R2 and now a ton of my queries dont work due to null dates. which means that higher up queries are getting no data returned where in 10 they did
this will be debugging hell
idletask
Well, you did debug this down to null dates
(Action) thinks he should create the tablespace nologging and alter it logging after import :/
It's the impdp-generated table that generates redo now :(
RJarett
common.db.DatabaseException: java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01405: fetched column value is NULL
not hard to debug that ;)
scratchme
Hi all. Is there a simple way to have an on-demand job, and activate the job to be scheduled immediately?
Basically I want to enable an update process that takes anywhere from several seconds to 10 mins or so from a web page, but I want to return after enabling it.
I created a job w/o a schedule that does what I want.
Ahh, looks like what I want is an optional param for DBMS_SCHEDULER.RUN_JOB
Thanks.
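[The optional parameter scratchme found is USE_CURRENT_SESSION: with FALSE, RUN_JOB hands the job to a background process and returns immediately instead of running it in the caller's session. The job name is hypothetical:]
    BEGIN
      DBMS_SCHEDULER.run_job(
        job_name            => 'WEB_UPDATE_JOB',  -- hypothetical job created earlier
        use_current_session => FALSE);            -- detach and return at once
    END;
    /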
idletask
256 MB of redo generated in 15 minutes with every possible object set to nologging :(
That is TOO MUCH
I really hope creating the tablespace itself as nologging will make a difference, otherwise I won't know what to do :(
BlkPoohba
i need help with this error
error: unpacking of archive failed on file /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/seeddb/xeseed.dfb;4b8884af: cpio: read
this is when installing with yum
RJarett
is the file corrupt?
BlkPoohba
i have no idea. how would I know?
is this file supposed to be there before oracle is installed?
RJarett
there is usually an md5sum value on the page you downloaded the original file from
BlkPoohba
oh. you mean is the rpm corrupt? or that file that is already on the system?
RJarett
its in the rpm
run cksum against the file
BlkPoohba
ok. thanks.
RJarett
http://www.oracle.com/technology/software/products/database/xe/htdocs/102xelinsoft.html
if it doesnt match the cksum value there your download was bad
why arent you installing with rpm -ihv
BlkPoohba
ok. thats kinda odd though. I installed it, forgot the system password, uninstalled and tried to reinstall and then got this error
RJarett
hmm
BlkPoohba
i figured yum would get the dependencies... which it did; it needed one at the time
RJarett
you could have just logged in with / as sysdba
without a password as the oracle user :P
BlkPoohba
i tried that and it didn't work.
when trying to connect as sys I get ORA-01033: ORACLE initialization or shutdown in progress
RJarett
shutdown abort
BlkPoohba
when 'connect / as sysdba' ORA-01031: insufficient privileges
i just installed it
trying to connect as root and getting ORA-12154: TNS:could not resolve the connect identifier specified
RJarett
kill 9 all the oracle procs
BlkPoohba
kill -9 ... not service oracle-xe stop ?
RJarett
you already ganked it up beyond whats good
now you just want to kill it off
BlkPoohba
ok. but all i did was install and run the configure command. I did nothing else except try to log in.
RJarett
and in that short time you forgot the system password?
BlkPoohba
I didn't forget. It just isn't taking what I made it
I used numbers and letters and it was 8 chars
ok I kill -9 all of them
RJarett
remove the package
BlkPoohba
uninstall it again? use yum or rpm -e?
RJarett
rpm
BlkPoohba
done
glaksmono
anyone here?
helppppppppppppppppp
Dormir
ask your question. if someone can help, they will
Rudemeister
uln sucks
hell
hello. does anybody use java stored procs with IN RAW parameters?
the tail of my raw data gets cut off.
Rudemeister
hell: sorry cant help
hell
i just send the received data to udp, without any changes, but get a truncated string when viewed with tcpdump.
Ha. maybe tcpdump problem) i'll check.
yes, i'm sorry. it's not oracle/java related.
idletask
(Action) wakes up to redo logs problems again
glaksmono
anyone here?
helppppppppp
idletask
Just ask your question
Is estimate=statistics faster than estimate=blocks on data pump import?
I find blocks estimate to be very slow - more than 5 minutes to tell me that 40 GB of data need to be imported :(
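[For reference, the two estimate methods on a network import (ESTIMATE is only honoured by impdp when NETWORK_LINK is used); connection details are hypothetical. STATISTICS reads the optimizer statistics, so it is fast but only as accurate as the stats; BLOCKS counts source blocks, which is slower over a link; and, as idletask notes below, out-of-line LOB segments may not be counted at all:]
    impdp system NETWORK_LINK=src_db SCHEMAS=app ESTIMATE=BLOCKS
    impdp system NETWORK_LINK=src_db SCHEMAS=app ESTIMATE=STATISTICS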
TheBonsai
idletask: i'd expect it to be faster, yes
idletask: but less exact
idletask
Hmm yeah, it is so inexact that it doesn't even account for LOBs :/
I wonder whether "compute statistics" has an influence on redo generation - which I'm still trying to minimize for the initial data import
TheBonsai
the statistics thing is just reading the statistics it exported before i think
(the normal DBMS_STATS statistics inside the database are part of the export)
for the initial import, can't you set tablespace/whatever NOLOGGING?
idletask
Yes I do
But in spite of that, there is still a LOT of redo generated
Close to 1 GB right now :(
The tablespace is nologging and I checked that all tables, LOB columns and indexes are also nologging
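[The checks idletask describes, runnable as the schema owner; all three dictionary views expose a LOGGING column. The last query is worth adding, since FORCE LOGGING at the database level silently overrides every NOLOGGING attribute:]
    SELECT table_name, logging FROM user_tables WHERE logging <> 'NO';
    SELECT table_name, column_name, logging FROM user_lobs WHERE logging <> 'NO';
    SELECT index_name, logging FROM user_indexes WHERE logging <> 'NO';
    SELECT force_logging FROM v$database;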
TheBonsai
and all your objects are in this one TS?
idletask
Yes
TheBonsai
*scratches head*
shouldn't generate much redo, just for dictionary and stuff i'd have guessed
idletask
There is 500k to 1M redo generated per second
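[One way to measure that rate: sample the instance-wide redo counter twice, some interval apart, and diff the values (the counter is in bytes since instance startup):]
    SELECT name, value FROM v$sysstat WHERE name = 'redo size';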
TheBonsai
but okay, you generate 1GB instead of 40GB redo
idletask
The import is not over, far from it
TheBonsai
what's your problem with the (now small) generated redo?
speed?
idletask
No, space
TheBonsai
redo logs don't take more space if you generate more redo, unless you archive
idletask
I do archive
And that's the problem
TheBonsai
what about disabling archiving?
idletask
I can do it right now but I won't be able to afford it in production
TheBonsai
for the initial import?
(i mean only for the import)
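[What "only for the import" would look like; it needs a bounce through MOUNT, and the database is unrecoverable while archiving is off, so a backup afterwards is essential:]
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    -- ... run the import, then reverse the steps with ARCHIVELOG ...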
idletask
The problem is that I will have several schemas to import live
TheBonsai
hm.... hm....
hm
are transportable tablespaces an option?
idletask
No, not enough storage on the SAN :(
Otherwise I'd have gone this way for sure
TheBonsai
there's a hidden parameter that disables logging, but i guess you need to restart the instance in this case
idletask
I'm kind of convinced it's the dictionary stuff that generates all the logs, but the hell if I know why
LOB columns probably don't help :/
TheBonsai
and this isn't good anyway. you lose all ability to recover without the logs. i don't mean the imported data so much (which is nologging for you it seems), i'm more afraid about the dict, yes
idletask
(Action) thinks he's gonna push for SAN space, he really needs it, it seems
TheBonsai
what about extending the space? if you use ASM it's just a matter of asking the storage admin and extending the diskgroup
ha, 2 idiots, one identical idea ;-)
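[The ASM route TheBonsai mentions, as a one-liner; diskgroup name and disk path are hypothetical, and rebalancing onto the new disk happens automatically:]
    ALTER DISKGROUP data ADD DISK '/dev/mapper/new_lun';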
idletask
It looks, however, like specifying a big "next" amount for datafiles generates less redo
Maybe I should use a blocksize of 32k instead of 16k...
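[What a big "next" means in practice; the file path and sizes are hypothetical:]
    ALTER DATABASE DATAFILE '/u02/oradata/db/app_data01.dbf'
      AUTOEXTEND ON NEXT 1G MAXSIZE 32G;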
TheBonsai
i don't think so. even 16K is unusual for OLTP scenarios
idletask
I have to, otherwise the database generation fails because of dbvault (ORA-01450: maximum key length (string) exceeded)
TheBonsai
eew
idletask
And then there are LOBs
Maybe I should store LOBs in another tablespace...
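[A sketch of moving LOBs to their own tablespace, assuming 11g SecureFiles are available; all names are hypothetical. DEDUPLICATE, which idletask asks about later, would go in the same clause:]
    CREATE TABLE app.docs (
      id  NUMBER PRIMARY KEY,
      doc BLOB
    )
    TABLESPACE app_data
    LOB (doc) STORE AS SECUREFILE (
      TABLESPACE app_lobs   -- hypothetical dedicated LOB tablespace
      NOCACHE NOLOGGING
    );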
TheBonsai
if you can influence the schema, you can try a few things, yes
idletask
Oh yeah, I forgot, I cannot use tts at all, because of TDE :(
We don't have the key for the wallet
Argh
And the biggest schema is 185G
40G is small compared to that :/
TheBonsai
the first thing you need is storage i guess :)
idletask
Yep
And something is definitely missing from storage technology
TheBonsai
what?
idletask
The ability for filesystems to communicate with the backing store, and to tell it that "I don't need these blocks anymore"... This would allow thin provisioning SANs to be even more space efficient
Right now, if you write 20 GB and erase 15 at the fs level, there are still 20GB taken on the SAN
TheBonsai
yes, because the SAN provides space for the filesystem, not for files
the filesystem still has this size
s/SAN/storage/ # to be exact..
idletask
Yes, but look at what happens with memory
Overcommitting is not a problem
You can already overcommit on thin provisioning SANs
But while freed memory is definitely free, this is not the case for storage
TheBonsai
the memory is not provided by a system that has no clue about the internal mechanisms; the memory is under OS control
if the storage system was under OS control (something like what you said) it would be possible, yes
if you use ASM 11.2 with acfs, you could share fs and database storage
idletask
I use LVM, it's flexible enough for my needs
Especially since I can add/remove LUNs (therefore PVs) live, and resize LVs live as well
TheBonsai
for the database storage to be flexible, you don't need LVM
idletask
Yes, but I'm not a fan of ASM ;)
TheBonsai
well, it takes much complexity out of the scenario. and it avoids the dependency on a filesystem and its features and limits, if any
idletask
I prefer storage to be under the control of the OS
TheBonsai
yea, i can understand you. but for at least a year or two now, ASM has been rock solid (11g or latest 10.2.0.4)
let's say it "just works".
anyways, i'm afk for a while
idletask
Have fun
TheBonsai
see you (good luck)
idletask
Does LOB deduplication have an influence on redo log generation?
OK, I'm pissed
I have done everything I can do to reduce the amount of redo logs generated during an import, and I'm getting nowhere :(
RJarett
buy more disks...
idletask
I don't hold the money bag...
Even with all tables, indexes and lobs set to nologging, I get anywhere from 300K to 2 MB/s of redo generated
And I've no f*ing idea _why_
OK, well, it can get down to 100k
But 100k?? Where does that come from? There's nothing else running on that instance
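[To pin down where that redo comes from, the per-session counters can be checked with the standard v$ views:]
    SELECT s.sid, s.username, s.program, ss.value AS redo_bytes
      FROM v$sesstat ss
      JOIN v$statname sn ON sn.statistic# = ss.statistic#
      JOIN v$session  s  ON s.sid = ss.sid
     WHERE sn.name = 'redo size'
     ORDER BY ss.value DESC;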